Jan 30 05:07:43 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 05:07:43 crc restorecon[4751]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 05:07:43 crc restorecon[4751]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:43 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc 
restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 05:07:44 crc restorecon[4751]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 05:07:45 crc kubenswrapper[4931]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:07:45 crc kubenswrapper[4931]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 05:07:45 crc kubenswrapper[4931]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:07:45 crc kubenswrapper[4931]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
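Aside: the long run of "not reset as customized by admin" messages above is expected restorecon behavior. container_file_t is a customizable SELinux type, so restorecon leaves existing labels of that type in place unless a forced relabel is requested; the trailing category pairs (s0:c7,c13 and similar) are per-pod MCS levels that keep one pod's files isolated from another's. A minimal Go sketch for reading such labels, assuming the github.com/opencontainers/selinux/go-selinux package (the path below is taken from the log and is purely illustrative):

// Sketch only: reads the SELinux label of a file, the same
// user:role:type:level string that restorecon reports above.
package main

import (
	"fmt"
	"log"

	selinux "github.com/opencontainers/selinux/go-selinux"
)

func main() {
	if !selinux.GetEnabled() {
		log.Fatal("SELinux is not enabled on this host")
	}
	// Illustrative path from the log; any file under /var/lib/kubelet works.
	path := "/var/lib/kubelet/config.json"
	label, err := selinux.FileLabel(path)
	if err != nil {
		log.Fatalf("reading label of %s: %v", path, err)
	}
	// Prints e.g. system_u:object_r:container_var_lib_t:s0
	fmt.Printf("%s %s\n", path, label)
}

Run against a pod volume path, this should print the same container_file_t label, MCS pair included, that the entries above mention.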
Jan 30 05:07:45 crc kubenswrapper[4931]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:07:45 crc kubenswrapper[4931]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.143826 4931 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148753 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148771 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148777 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148782 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148787 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148793 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148799 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148804 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148809 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148814 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148819 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148824 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148828 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148834 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148840 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148845 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148850 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148855 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148861 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148866 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 05:07:45 crc 
kubenswrapper[4931]: W0130 05:07:45.148871 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148875 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148880 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148884 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148889 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148893 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148898 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148903 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148914 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148919 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148925 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148929 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148934 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148939 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148944 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148949 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148954 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148960 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148967 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148972 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148980 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148986 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148991 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.148997 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149002 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149007 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149013 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149018 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149023 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149028 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149033 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149038 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149043 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149049 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149055 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149062 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149068 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149073 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149078 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149083 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149088 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149094 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149099 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149104 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149109 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149114 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149119 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149124 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149129 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149133 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.149140 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151265 4931 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151287 4931 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151309 4931 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151324 4931 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151332 4931 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151338 4931 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151350 4931 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151358 4931 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151363 4931 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151369 4931 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151377 4931 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151382 4931 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151388 4931 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151393 4931 flags.go:64] FLAG: --cgroup-root="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151399 4931 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151405 4931 flags.go:64] FLAG: --client-ca-file="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151411 4931 flags.go:64] FLAG: --cloud-config="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151432 4931 flags.go:64] FLAG: --cloud-provider="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151438 4931 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151446 4931 flags.go:64] FLAG: --cluster-domain="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151452 4931 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151458 4931 flags.go:64] FLAG: --config-dir="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151464 4931 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151470 4931 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151479 4931 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151485 4931 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151491 4931 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151497 4931 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151503 4931 flags.go:64] FLAG: --contention-profiling="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151508 4931 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151514 4931 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151520 4931 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151526 4931 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151533 4931 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151539 4931 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151545 4931 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151551 4931 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151558 4931 flags.go:64] FLAG: --enable-server="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151563 4931 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151571 4931 
flags.go:64] FLAG: --event-burst="100" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151577 4931 flags.go:64] FLAG: --event-qps="50" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151583 4931 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151589 4931 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151594 4931 flags.go:64] FLAG: --eviction-hard="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151601 4931 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151607 4931 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151612 4931 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151618 4931 flags.go:64] FLAG: --eviction-soft="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151624 4931 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151629 4931 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151635 4931 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151640 4931 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151647 4931 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151652 4931 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151658 4931 flags.go:64] FLAG: --feature-gates="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151665 4931 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151671 4931 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151677 4931 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151682 4931 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151688 4931 flags.go:64] FLAG: --healthz-port="10248" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151693 4931 flags.go:64] FLAG: --help="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151699 4931 flags.go:64] FLAG: --hostname-override="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151704 4931 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151710 4931 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151716 4931 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151723 4931 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151728 4931 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151736 4931 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151742 4931 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151747 4931 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151753 4931 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151758 4931 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151764 4931 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151770 4931 flags.go:64] FLAG: --kube-reserved="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151775 4931 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151781 4931 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151789 4931 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151794 4931 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151800 4931 flags.go:64] FLAG: --lock-file="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151806 4931 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151811 4931 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151817 4931 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151826 4931 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151832 4931 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151837 4931 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151843 4931 flags.go:64] FLAG: --logging-format="text" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151849 4931 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151855 4931 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151861 4931 flags.go:64] FLAG: --manifest-url="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151866 4931 flags.go:64] FLAG: --manifest-url-header="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151874 4931 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151879 4931 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151886 4931 flags.go:64] FLAG: --max-pods="110" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151893 4931 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151898 4931 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151905 4931 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151911 4931 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151916 4931 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151922 4931 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151928 4931 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151941 4931 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151947 4931 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151952 4931 flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151958 4931 flags.go:64] FLAG: --pod-cidr="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151964 4931 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151975 4931 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151980 4931 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151986 4931 flags.go:64] FLAG: --pods-per-core="0" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151991 4931 flags.go:64] FLAG: --port="10250" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.151997 4931 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152004 4931 flags.go:64] FLAG: --provider-id="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152010 4931 flags.go:64] FLAG: --qos-reserved="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152018 4931 flags.go:64] FLAG: --read-only-port="10255" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152023 4931 flags.go:64] FLAG: --register-node="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152029 4931 flags.go:64] FLAG: --register-schedulable="true" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152035 4931 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152045 4931 flags.go:64] FLAG: --registry-burst="10" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152051 4931 flags.go:64] FLAG: --registry-qps="5" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152057 4931 flags.go:64] FLAG: --reserved-cpus="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152064 4931 flags.go:64] FLAG: --reserved-memory="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152071 4931 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152077 4931 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152083 4931 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152089 4931 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152095 4931 flags.go:64] FLAG: --runonce="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152101 4931 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152107 4931 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152113 4931 flags.go:64] FLAG: --seccomp-default="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152121 4931 flags.go:64] FLAG: --serialize-image-pulls="true" 
Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152127 4931 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152134 4931 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152140 4931 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152146 4931 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152152 4931 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152158 4931 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152164 4931 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152170 4931 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152176 4931 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152182 4931 flags.go:64] FLAG: --system-cgroups="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152188 4931 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152200 4931 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152206 4931 flags.go:64] FLAG: --tls-cert-file="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152212 4931 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152219 4931 flags.go:64] FLAG: --tls-min-version="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152225 4931 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152231 4931 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152236 4931 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152242 4931 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152250 4931 flags.go:64] FLAG: --v="2" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152259 4931 flags.go:64] FLAG: --version="false" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152268 4931 flags.go:64] FLAG: --vmodule="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152276 4931 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152283 4931 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152413 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152437 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152442 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152448 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152453 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 05:07:45 crc 
kubenswrapper[4931]: W0130 05:07:45.152458 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152495 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152501 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152506 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152511 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152516 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152520 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152525 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152529 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152535 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152541 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152546 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152551 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152556 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152561 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152566 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152570 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152575 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152579 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152584 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152591 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152596 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152601 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152607 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152613 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152618 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152624 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152629 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152634 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152639 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152644 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152648 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152652 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152657 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152663 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152668 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152673 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152679 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152684 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152689 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152694 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152699 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152704 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152709 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152714 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152720 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152727 4931 feature_gate.go:351] Setting 
deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152733 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152738 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152744 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152751 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152756 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152762 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152767 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152774 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152779 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152785 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152791 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152796 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152801 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152806 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152812 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152820 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152825 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152830 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.152835 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.152856 4931 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.167410 4931 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.167481 4931 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:07:45 crc 
kubenswrapper[4931]: W0130 05:07:45.167637 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167655 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167664 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167674 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167686 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167698 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167708 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167718 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167727 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167736 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167745 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167753 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167761 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167771 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167779 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167787 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167795 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167802 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167811 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167819 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167827 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167835 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167843 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167851 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167859 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167867 4931 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167876 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167884 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167892 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167899 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167908 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167916 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167926 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167936 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167946 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167955 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167963 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167972 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167980 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.167990 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168000 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168009 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168017 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168025 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168033 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168040 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168049 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168056 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168065 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168072 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168081 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168089 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168097 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168105 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168112 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168120 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168128 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168140 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168150 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168159 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168168 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168178 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168187 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168196 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168204 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168213 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168222 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168231 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168239 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168248 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168257 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.168270 4931 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168551 4931 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168566 4931 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168575 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168585 4931 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168597 4931 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168606 4931 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168615 4931 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168623 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168631 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168640 4931 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168649 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168657 4931 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168665 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168675 4931 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168684 4931 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168693 4931 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168701 4931 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168710 4931 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168719 4931 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168728 4931 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168737 4931 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168745 4931 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168753 4931 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168761 4931 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168769 4931 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168779 4931 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168788 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168796 4931 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168803 4931 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 05:07:45 crc kubenswrapper[4931]: 
W0130 05:07:45.168811 4931 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168821 4931 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168832 4931 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168840 4931 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168849 4931 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168858 4931 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168866 4931 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168877 4931 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168886 4931 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168895 4931 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168903 4931 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168912 4931 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168921 4931 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168930 4931 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168938 4931 feature_gate.go:330] unrecognized feature gate: Example Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168946 4931 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168954 4931 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168961 4931 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168969 4931 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168977 4931 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168985 4931 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.168992 4931 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169000 4931 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169008 4931 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169015 4931 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169023 
4931 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169031 4931 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169038 4931 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169047 4931 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169055 4931 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169063 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169070 4931 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169078 4931 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169086 4931 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169094 4931 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169101 4931 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169109 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169116 4931 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169124 4931 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169132 4931 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169140 4931 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.169148 4931 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.169160 4931 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.170988 4931 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.177579 4931 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.177711 4931 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
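The bootstrap and certificate_store entries above, and the rotation deadlines logged next, can be cross-checked against the PEM file the kubelet says it loaded. A minimal sketch, assuming the file at the logged path is readable and contains at least one CERTIFICATE block; the NotAfter it prints should match the expiration that certificate_manager reports in the following entries.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path taken from the certificate_store.go log line above.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        // Walk the PEM blocks and report validity for each CERTIFICATE block
        // (the file may also carry the private key, which is skipped).
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }
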
Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.179976 4931 server.go:997] "Starting client certificate rotation" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.180030 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.183605 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 10:41:40.527569933 +0000 UTC Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.183715 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.207300 4931 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.210792 4931 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.211576 4931 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.231039 4931 log.go:25] "Validated CRI v1 runtime API" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.276835 4931 log.go:25] "Validated CRI v1 image API" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.279362 4931 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.287134 4931 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-05-03-06-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.287211 4931 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.318297 4931 manager.go:217] Machine: {Timestamp:2026-01-30 05:07:45.313151778 +0000 UTC m=+0.683062095 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:babf1111-baa6-43bf-8e98-8707b9d18072 BootID:9d83649b-6a34-4b83-bc96-3ff1ac14c758 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ce:65:43 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ce:65:43 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:06:ab:4b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:46:dd:b6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6b:76:50 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:61:3d:06 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:48:9c:e3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:23:04:bc:a6:8c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:cc:3f:32:f5:a0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.318786 4931 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.318971 4931 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.319447 4931 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.319772 4931 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.319826 4931 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.320135 4931 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 
05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.320179 4931 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.320787 4931 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.320832 4931 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.321746 4931 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.321843 4931 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.327458 4931 kubelet.go:418] "Attempting to sync node with API server" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.327485 4931 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.327506 4931 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.327523 4931 kubelet.go:324] "Adding apiserver pod source" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.327540 4931 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.332384 4931 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.333644 4931 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
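Just above, the kubelet loaded its serving pair from /var/lib/kubelet/pki/kubelet-server-current.pem, and earlier (certificate_manager.go:356) it logged each certificate's expiration together with a jittered rotation deadline that falls inside the validity window. Below is a minimal stdlib sketch of computing such a deadline from the PEM on disk, assuming a 70-90% jitter window similar to what k8s.io/client-go/util/certificate documents; it is illustrative, not the kubelet's exact policy.

// certdeadline_sketch.go - illustrative only; the kubelet's rotation logic lives
// in k8s.io/client-go/util/certificate. The 70-90% jitter window is an assumption
// matching its documented behavior, not a verbatim copy.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any kubelet cert/key pair PEM works.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data) // first block in the pair is the certificate
	if block == nil || block.Type != "CERTIFICATE" {
		fmt.Fprintln(os.Stderr, "no certificate block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	validity := cert.NotAfter.Sub(cert.NotBefore)
	// Pick a deadline 70-90% of the way through the validity window, so
	// rotation starts well before expiry and fleets don't rotate in lockstep.
	jitter := 0.7 + 0.2*rand.Float64()
	deadline := cert.NotBefore.Add(time.Duration(float64(validity) * jitter))
	fmt.Printf("Certificate expiration is %s, rotation deadline is %s\n",
		cert.NotAfter.UTC(), deadline.UTC())
}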
Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.334256 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.334258 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.334417 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.334473 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.336670 4931 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339048 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339070 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339079 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339087 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339099 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339108 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339116 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339137 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339146 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339155 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339172 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339180 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.339204 4931 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 
05:07:45.339644 4931 server.go:1280] "Started kubelet" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.344923 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.345261 4931 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.345200 4931 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.346823 4931 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:07:45 crc systemd[1]: Started Kubernetes Kubelet. Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.351593 4931 server.go:460] "Adding debug handlers to kubelet server" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.352046 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.352100 4931 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.358472 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:29:52.187862943 +0000 UTC Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.358618 4931 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.358971 4931 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.358845 4931 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.358831 4931 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.357359 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f69f1afef1f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 05:07:45.339612989 +0000 UTC m=+0.709523246,LastTimestamp:2026-01-30 05:07:45.339612989 +0000 UTC m=+0.709523246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.359836 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.360947 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.360995 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.368473 4931 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.368587 4931 factory.go:55] Registering systemd factory Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.368612 4931 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.369082 4931 factory.go:153] Registering CRI-O factory Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.369127 4931 factory.go:221] Registration of the crio container factory successfully Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.369168 4931 factory.go:103] Registering Raw factory Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.369189 4931 manager.go:1196] Started watching for new ooms in manager Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.369919 4931 manager.go:319] Starting recovery of all containers Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.371788 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.371882 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.371918 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372008 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372039 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372069 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372099 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372126 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372159 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372192 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372222 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372250 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372279 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372381 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372411 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372477 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372509 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372548 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372576 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372637 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372666 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372693 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372725 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372753 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372783 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372811 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372846 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372876 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372907 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372935 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.372972 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373000 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373025 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373050 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373078 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373105 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373131 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373159 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373187 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373215 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373246 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373278 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373306 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373332 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373362 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373389 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373414 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373475 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373502 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373531 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373559 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373586 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373622 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373653 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373682 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373711 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373738 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373773 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373795 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373816 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373840 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373864 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373890 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373915 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373941 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.373974 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374001 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374028 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374056 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374082 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374110 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374136 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374163 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374194 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374222 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374247 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374274 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374299 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374325 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374350 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374378 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374401 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374457 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374490 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374512 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374537 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374563 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374592 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374621 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374651 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374680 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374705 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374730 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374754 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374779 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374803 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374827 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374856 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374884 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374910 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374934 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374959 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.374984 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375012 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375065 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375100 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375128 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375158 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375186 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375216 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375243 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375270 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375298 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375329 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375355 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375383 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375408 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375471 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375498 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375524 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375551 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375580 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375606 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375634 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375659 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375685 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375712 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375736 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375759 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375793 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375815 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375839 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375861 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375884 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375907 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375934 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375956 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.375982 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376005 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376031 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376055 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376080 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376110 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376131 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376153 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376175 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376199 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376221 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376245 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376270 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376337 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376366 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376393 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376416 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376472 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376496 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376520 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376543 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376566 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376588 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376610 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376634 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376658 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376686 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376712 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376739 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376765 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376792 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376817 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376848 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376877 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376905 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376933 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376960 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.376986 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377015 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377043 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377069 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377099 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377151 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377179 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377206 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377232 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377259 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377285 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377314 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377341 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.377371 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380620 4931 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380668 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380692 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380714 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380734 4931 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380755 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380774 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380794 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380813 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380833 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380863 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380883 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380903 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380925 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380946 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380965 4931 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.380983 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.381003 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.381022 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.381039 4931 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.381058 4931 reconstruct.go:97] "Volume reconstruction finished" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.381072 4931 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.405734 4931 manager.go:324] Recovery completed Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.416754 4931 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.420587 4931 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.420655 4931 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.420701 4931 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.420846 4931 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.423384 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.423510 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.424605 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.429177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.429235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.429255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.431930 4931 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.432088 4931 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.432211 4931 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.453658 4931 policy_none.go:49] "None policy: Start" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.455310 4931 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.455377 4931 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.459938 4931 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.521483 4931 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.523068 4931 manager.go:334] "Starting Device Plugin manager" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.523167 4931 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.523199 4931 server.go:79] "Starting device plugin registration server" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.524029 4931 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.524053 4931 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.524475 4931 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.524601 4931 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.524617 4931 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.533907 4931 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.561910 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.598011 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f69f1afef1f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 05:07:45.339612989 +0000 UTC m=+0.709523246,LastTimestamp:2026-01-30 05:07:45.339612989 +0000 UTC m=+0.709523246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.625126 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.626334 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.626364 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.626375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.626404 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.626724 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.722416 4931 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.722828 4931 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.727271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.727347 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.727369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.727682 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.728065 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.728151 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729752 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729910 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.729976 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731389 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731460 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731541 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.731839 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.732090 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.732150 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.745173 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.745300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.745332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.745834 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.745953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.746013 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.746048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.746708 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.746810 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.747448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.747496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.747590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.748023 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.748094 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.748661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.748728 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.748751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.749826 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.749892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.749915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: W0130 05:07:45.773834 4931 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/cpuset.cpus.effective": open /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/cpuset.cpus.effective: no such device Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787696 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787793 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787833 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787870 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787906 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787937 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.787998 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788032 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788068 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788104 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788138 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788173 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788204 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.788333 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.827368 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.829089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.829142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.829163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.829202 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.830123 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.890938 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891027 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891062 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891095 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891128 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891158 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891189 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891187 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891219 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891256 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891254 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891312 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891335 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891286 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891345 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891343 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891380 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891370 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891403 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891461 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891539 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891653 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891681 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891758 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891785 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891845 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.891935 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: I0130 05:07:45.892047 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:45 crc kubenswrapper[4931]: E0130 05:07:45.963037 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.069531 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.092024 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.103403 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.125008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.130284 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8ed59f42dfaaf99e9b9d7c2f3d6cd22ca5c6357d0efc979f70f29613bee11d7b WatchSource:0}: Error finding container 8ed59f42dfaaf99e9b9d7c2f3d6cd22ca5c6357d0efc979f70f29613bee11d7b: Status 404 returned error can't find the container with id 8ed59f42dfaaf99e9b9d7c2f3d6cd22ca5c6357d0efc979f70f29613bee11d7b Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.131915 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dd16176df9a25b194f64bbc285ca871fe83a9157995e45ac99634671f9447392 WatchSource:0}: Error finding container dd16176df9a25b194f64bbc285ca871fe83a9157995e45ac99634671f9447392: Status 404 returned error can't find the container with id dd16176df9a25b194f64bbc285ca871fe83a9157995e45ac99634671f9447392 Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.137054 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.141248 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8438d93fed226e78450fc88faaa36b5308080acfa232ce5075cab3515e4413d1 WatchSource:0}: Error finding container 8438d93fed226e78450fc88faaa36b5308080acfa232ce5075cab3515e4413d1: Status 404 returned error can't find the container with id 8438d93fed226e78450fc88faaa36b5308080acfa232ce5075cab3515e4413d1 Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.149230 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b122131d6ceb64d1480bf519eefdb964b30420cf7e966c732893e7759853bffb WatchSource:0}: Error finding container b122131d6ceb64d1480bf519eefdb964b30420cf7e966c732893e7759853bffb: Status 404 returned error can't find the container with id b122131d6ceb64d1480bf519eefdb964b30420cf7e966c732893e7759853bffb Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.156128 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4454be09f6ce2236d995dd359b2f7d824be5104d246fe37e1f4005fe65439ce0 WatchSource:0}: Error finding container 4454be09f6ce2236d995dd359b2f7d824be5104d246fe37e1f4005fe65439ce0: Status 404 returned error can't find the container with id 4454be09f6ce2236d995dd359b2f7d824be5104d246fe37e1f4005fe65439ce0 Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.231003 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.234659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 
05:07:46.234732 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.234756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.234805 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:07:46 crc kubenswrapper[4931]: E0130 05:07:46.235678 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.346001 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:46 crc kubenswrapper[4931]: E0130 05:07:46.346117 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.346685 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.358847 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:35:35.561640113 +0000 UTC Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.427080 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b122131d6ceb64d1480bf519eefdb964b30420cf7e966c732893e7759853bffb"} Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.428685 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8438d93fed226e78450fc88faaa36b5308080acfa232ce5075cab3515e4413d1"} Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.432143 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd16176df9a25b194f64bbc285ca871fe83a9157995e45ac99634671f9447392"} Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.433573 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8ed59f42dfaaf99e9b9d7c2f3d6cd22ca5c6357d0efc979f70f29613bee11d7b"} Jan 30 05:07:46 crc kubenswrapper[4931]: I0130 05:07:46.435029 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4454be09f6ce2236d995dd359b2f7d824be5104d246fe37e1f4005fe65439ce0"} Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.666671 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:46 crc kubenswrapper[4931]: E0130 05:07:46.666840 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:46 crc kubenswrapper[4931]: E0130 05:07:46.764201 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.852735 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:46 crc kubenswrapper[4931]: E0130 05:07:46.852861 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:46 crc kubenswrapper[4931]: W0130 05:07:46.880008 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:46 crc kubenswrapper[4931]: E0130 05:07:46.880882 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.036240 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.037726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.037768 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.037779 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.037843 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:07:47 crc kubenswrapper[4931]: E0130 
05:07:47.038173 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.347268 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.359309 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:19:25.102159452 +0000 UTC Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.408871 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 05:07:47 crc kubenswrapper[4931]: E0130 05:07:47.410847 4931 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.442706 4931 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4" exitCode=0 Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.442884 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.442865 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4"} Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.444610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.444680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.444703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.447694 4931 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8e913d425338029033aa1073be9668185b9f9c4dbb2560466d086b52aa6ce17c" exitCode=0 Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.447780 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8e913d425338029033aa1073be9668185b9f9c4dbb2560466d086b52aa6ce17c"} Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.447805 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.449048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:47 crc 
kubenswrapper[4931]: I0130 05:07:47.449089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.449104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.451006 4931 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733" exitCode=0 Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.451296 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.451292 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733"} Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.453415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.453546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.453568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.454088 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a"} Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.454119 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07"} Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.456940 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0" exitCode=0 Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.456972 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0"} Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.457108 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.458263 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.458298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.458389 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 
05:07:47.466304 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.467828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.467865 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:47 crc kubenswrapper[4931]: I0130 05:07:47.467878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:48 crc kubenswrapper[4931]: W0130 05:07:48.311146 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:48 crc kubenswrapper[4931]: E0130 05:07:48.311261 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.346906 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.359589 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:06:22.504439466 +0000 UTC Jan 30 05:07:48 crc kubenswrapper[4931]: E0130 05:07:48.365491 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.468875 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ddb77e9defc8c4121eae34daeca1948ee8aef2d6c884fb05b2a5c53e85cbe9c8"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.469003 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.470076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.470095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.470103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473042 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473069 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473139 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.473997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.477330 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.477400 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.477451 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.479157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.479183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.479195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.481151 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.481214 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.481233 4931 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.483070 4931 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7" exitCode=0 Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.483108 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7"} Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.483178 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.483807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.483835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.483846 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:48 crc kubenswrapper[4931]: W0130 05:07:48.601117 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.179:6443: connect: connection refused Jan 30 05:07:48 crc kubenswrapper[4931]: E0130 05:07:48.601231 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.179:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.639725 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.643782 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.643818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.643828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:48 crc kubenswrapper[4931]: I0130 05:07:48.643853 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:07:48 crc kubenswrapper[4931]: E0130 05:07:48.646332 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.179:6443: connect: connection refused" node="crc" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.360472 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:52:39.314119283 +0000 UTC Jan 30 05:07:49 
crc kubenswrapper[4931]: I0130 05:07:49.489664 4931 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c" exitCode=0 Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.489762 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c"} Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.489818 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.491735 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.491776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.491788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.494265 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.494734 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64"} Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.494767 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944"} Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.494868 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.494953 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.495019 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.495357 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.494880 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.495843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.495869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.495880 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:49 crc 
kubenswrapper[4931]: I0130 05:07:49.496384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496623 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496624 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:49 crc kubenswrapper[4931]: I0130 05:07:49.496726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.361033 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:17:33.985053481 +0000 UTC Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.460911 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.471682 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.504769 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.505573 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0"} Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.505636 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57"} Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.505659 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0"} Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.505782 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.506572 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.507247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.507284 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.507300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.507606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.507682 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:50 crc kubenswrapper[4931]: I0130 05:07:50.507703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.361651 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:38:44.348541321 +0000 UTC Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.412938 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.513132 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca"} Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.513342 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.513273 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.513355 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e"} Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.513231 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515141 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515719 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.515850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.774791 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.846933 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.849037 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.849092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.849106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:51 crc kubenswrapper[4931]: I0130 05:07:51.849142 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.362551 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:56:38.316154103 +0000 UTC Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.393946 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.478289 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.516107 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.516149 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.516239 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.517790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.517831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.517844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.518999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.519049 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.519080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.519049 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.519156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:52 crc kubenswrapper[4931]: I0130 05:07:52.519169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.364530 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:04:10.730996966 +0000 UTC Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.501872 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.518646 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.518679 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.520286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.520340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.520358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.520354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.520578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:53 crc kubenswrapper[4931]: I0130 05:07:53.520613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:54 crc kubenswrapper[4931]: I0130 05:07:54.365380 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:28:02.892088524 +0000 UTC Jan 30 05:07:54 crc kubenswrapper[4931]: I0130 05:07:54.809862 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 05:07:54 crc kubenswrapper[4931]: I0130 05:07:54.810156 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:54 crc kubenswrapper[4931]: I0130 05:07:54.812018 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:54 crc kubenswrapper[4931]: I0130 05:07:54.812090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:54 crc kubenswrapper[4931]: I0130 05:07:54.812114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:55 crc kubenswrapper[4931]: I0130 05:07:55.366069 4931 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:26:09.387342625 +0000 UTC Jan 30 05:07:55 crc kubenswrapper[4931]: I0130 05:07:55.394581 4931 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 05:07:55 crc kubenswrapper[4931]: I0130 05:07:55.394704 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 05:07:55 crc kubenswrapper[4931]: E0130 05:07:55.534059 4931 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.055909 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.056230 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.058258 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.058310 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.058329 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.366528 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:09:07.797859589 +0000 UTC Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.567598 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.567910 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.569654 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.569731 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:07:56 crc kubenswrapper[4931]: I0130 05:07:56.569755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:07:57 crc kubenswrapper[4931]: I0130 05:07:57.367463 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:22:21.663547629 +0000 UTC Jan 30 05:07:58 crc kubenswrapper[4931]: I0130 05:07:58.368303 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 
08:14:05.187180104 +0000 UTC Jan 30 05:07:59 crc kubenswrapper[4931]: I0130 05:07:59.347483 4931 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 05:07:59 crc kubenswrapper[4931]: I0130 05:07:59.368895 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:21:25.527037797 +0000 UTC Jan 30 05:07:59 crc kubenswrapper[4931]: W0130 05:07:59.617444 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 05:07:59 crc kubenswrapper[4931]: I0130 05:07:59.617599 4931 trace.go:236] Trace[1759184357]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 05:07:49.615) (total time: 10002ms): Jan 30 05:07:59 crc kubenswrapper[4931]: Trace[1759184357]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:07:59.617) Jan 30 05:07:59 crc kubenswrapper[4931]: Trace[1759184357]: [10.002101739s] [10.002101739s] END Jan 30 05:07:59 crc kubenswrapper[4931]: E0130 05:07:59.617639 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 05:07:59 crc kubenswrapper[4931]: W0130 05:07:59.776450 4931 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 05:07:59 crc kubenswrapper[4931]: I0130 05:07:59.776578 4931 trace.go:236] Trace[1486905665]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 05:07:49.774) (total time: 10001ms): Jan 30 05:07:59 crc kubenswrapper[4931]: Trace[1486905665]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:07:59.776) Jan 30 05:07:59 crc kubenswrapper[4931]: Trace[1486905665]: [10.001673098s] [10.001673098s] END Jan 30 05:07:59 crc kubenswrapper[4931]: E0130 05:07:59.776609 4931 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 05:08:00 crc kubenswrapper[4931]: I0130 05:08:00.233806 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 05:08:00 crc kubenswrapper[4931]: I0130 05:08:00.233928 
4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 05:08:00 crc kubenswrapper[4931]: I0130 05:08:00.239221 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 05:08:00 crc kubenswrapper[4931]: I0130 05:08:00.239297 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 05:08:00 crc kubenswrapper[4931]: I0130 05:08:00.369137 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:02:04.927131676 +0000 UTC Jan 30 05:08:01 crc kubenswrapper[4931]: I0130 05:08:01.370290 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:38:47.567372124 +0000 UTC Jan 30 05:08:01 crc kubenswrapper[4931]: I0130 05:08:01.420854 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:08:01 crc kubenswrapper[4931]: I0130 05:08:01.421027 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:08:01 crc kubenswrapper[4931]: I0130 05:08:01.422240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:01 crc kubenswrapper[4931]: I0130 05:08:01.422276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:01 crc kubenswrapper[4931]: I0130 05:08:01.422286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.370705 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:00:18.305538746 +0000 UTC Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.487406 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.487691 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.489677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.489752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.489773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:02 crc kubenswrapper[4931]: 
I0130 05:08:02.495270 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.550224 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.552273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.552341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.552360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:02 crc kubenswrapper[4931]: I0130 05:08:02.983762 4931 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 05:08:03 crc kubenswrapper[4931]: I0130 05:08:03.371229 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:01:31.453542205 +0000 UTC Jan 30 05:08:04 crc kubenswrapper[4931]: I0130 05:08:04.372180 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:30:05.610266032 +0000 UTC Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.229442 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.229867 4931 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.230325 4931 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.230643 4931 trace.go:236] Trace[806998970]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 05:07:54.235) (total time: 10995ms): Jan 30 05:08:05 crc kubenswrapper[4931]: Trace[806998970]: ---"Objects listed" error: 10995ms (05:08:05.230) Jan 30 05:08:05 crc kubenswrapper[4931]: Trace[806998970]: [10.995515157s] [10.995515157s] END Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.230665 4931 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.231120 4931 trace.go:236] Trace[653608471]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 05:07:52.528) (total time: 12702ms): Jan 30 05:08:05 crc kubenswrapper[4931]: Trace[653608471]: ---"Objects listed" error: 12702ms (05:08:05.230) Jan 30 05:08:05 crc kubenswrapper[4931]: Trace[653608471]: [12.702132969s] [12.702132969s] END Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.231143 4931 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.236609 4931 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 
30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.268996 4931 csr.go:261] certificate signing request csr-p6576 is approved, waiting to be issued Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.293626 4931 csr.go:257] certificate signing request csr-p6576 is issued Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.304400 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50822->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.304459 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50834->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.304532 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50834->192.168.126.11:17697: read: connection reset by peer" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.304531 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50822->192.168.126.11:17697: read: connection reset by peer" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.304937 4931 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.305023 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.340659 4931 apiserver.go:52] "Watching apiserver" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.346492 4931 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.346852 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.347454 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.347505 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.347575 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.347850 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.347879 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.347923 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.347947 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.347991 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.348298 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.350916 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.352511 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.352552 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.352939 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.353029 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.353115 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.353032 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.353040 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.353268 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.363711 4931 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.372492 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:43:00.723969905 +0000 UTC Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.391193 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.395655 4931 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.395730 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.404155 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.414256 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.425506 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431669 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431703 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431733 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431750 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431770 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431786 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431896 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431919 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431936 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431952 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431969 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.431985 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432004 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432022 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432043 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432058 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432078 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 
05:08:05.432099 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432114 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432130 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432145 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432161 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432177 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432192 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432227 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432245 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: 
I0130 05:08:05.432263 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432281 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432307 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432323 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432339 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432356 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432370 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432390 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432409 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432448 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432465 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432481 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432496 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432511 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432526 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432541 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432559 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432572 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432589 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432617 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 05:08:05 crc 
kubenswrapper[4931]: I0130 05:08:05.432634 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432650 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432668 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432686 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432703 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432720 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432737 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432753 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432771 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432788 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 05:08:05 crc 
kubenswrapper[4931]: I0130 05:08:05.432805 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432821 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432839 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432856 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432874 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432893 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432912 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432928 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432944 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432978 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.432995 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433016 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433032 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433048 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433063 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433078 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433102 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433119 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433138 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433153 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433171 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433187 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433204 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433220 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433237 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433266 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433282 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433298 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433313 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433328 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433345 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433362 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433378 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433396 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433413 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433442 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433458 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433473 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433488 4931 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433503 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433520 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433536 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433551 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433568 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433585 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433600 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433616 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433633 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 
05:08:05.433648 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433666 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433681 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433698 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433714 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433749 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433767 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433783 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433801 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433817 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " 
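
The run of reconciler_common.go:159 entries above and below this point is the kubelet volume manager, freshly restarted, starting UnmountVolume operations for every volume it still tracks on behalf of pods that no longer exist on the node. Each entry carries three recoverable fields: the volume's logical name, its UniqueName (plugin path, pod UID, volume name), and the owning pod UID. A minimal sketch for tallying these entries per pod from a journal dump on stdin; the program and its regex are this note's own, not part of kubelet, and the pattern matches the literal \" escapes exactly as they appear in the log text:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// One match per reconciler entry: capture the volume name and the pod UID.
	// The journal text contains literal \" sequences, so the pattern expects a
	// backslash before each quote.
	re := regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\".*?\(UID: \\"([0-9a-f-]+)\\"`)
	perPod := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines here run to several KB
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			perPod[m[2]] = append(perPod[m[2]], m[1])
		}
	}
	for uid, vols := range perPod {
		fmt.Printf("%s: %d volume(s) pending unmount: %v\n", uid, len(vols), vols)
	}
}
```

Run against this excerpt, each pod UID should come out with its full set of secret, configmap, projected, and empty-dir volumes, which is a quick way to see which workloads were torn down in this pass.
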
Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433834 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433852 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433867 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433883 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433900 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433917 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433932 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433950 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433970 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.433986 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434006 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434023 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434039 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434055 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434073 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434091 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434109 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434127 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434144 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434161 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434181 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434197 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434212 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434229 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434246 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434261 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434277 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434293 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434310 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434327 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434342 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434357 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434375 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434392 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434410 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434443 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434460 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434476 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434493 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434509 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434526 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434543 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434561 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434580 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434597 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434614 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434631 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434652 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434669 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434699 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434716 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434732 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434748 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434766 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434782 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434801 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434818 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434836 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434853 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434869 4931 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434887 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434903 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434920 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434936 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434952 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434969 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.434986 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435002 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435020 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 05:08:05 crc 
kubenswrapper[4931]: I0130 05:08:05.435035 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435052 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435071 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435089 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435106 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435124 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435141 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435160 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435176 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435192 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
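
At 05:08:05.435236 the stream flips from reconciler_common.go:159 (unmounts for pods that are gone) to reconciler_common.go:218 MountVolume and :245 VerifyControllerAttachedVolume entries for pods that are still scheduled here (network-check-source, network-node-identity-vrzqb, iptables-alerter-4ln5h, network-operator). Both kinds of entry come out of the same reconciler pass: it diffs the desired state of the world against the actual state and issues an operation for every mismatch. A schematic of that diff, using a toy in-memory model rather than kubelet's real pkg/kubelet/volumemanager/reconciler types:

```go
// Illustrative diff only; kubelet's real reconciler tracks far more state
// (attach status, device paths, SELinux contexts) than this sketch does.
package main

import "fmt"

// volKey identifies a mount the way the log entries do: pod UID + volume name.
type volKey struct{ podUID, volName string }

func reconcile(desired, actual map[volKey]bool) {
	for k := range actual {
		if !desired[k] { // mounted, but its pod is gone -> start an unmount
			fmt.Printf("UnmountVolume started for volume %q pod %q\n", k.volName, k.podUID)
		}
	}
	for k := range desired {
		if !actual[k] { // wanted by a scheduled pod, not mounted yet -> start a mount
			fmt.Printf("MountVolume started for volume %q pod %q\n", k.volName, k.podUID)
		}
	}
}

func main() {
	// Toy states: one leftover mount from a deleted pod, one pending mount.
	actual := map[volKey]bool{{podUID: "49c341d1", volName: "v4-0-config-system-session"}: true}
	desired := map[volKey]bool{{podUID: "9d751cbb", volName: "kube-api-access-s2dwl"}: true}
	reconcile(desired, actual)
}
```
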
Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435236 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435260 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435282 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435338 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435358 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435380 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.435408 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436162 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436245 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436284 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436339 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436370 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.436396 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.437076 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.437467 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.437507 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.438015 4931 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.438346 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.438491 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.438776 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.438821 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.439070 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.439069 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.439233 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.439353 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.439606 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.439616 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:08:05.939581335 +0000 UTC m=+21.309491592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.439760 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440003 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440098 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440440 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440462 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440563 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440453 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.440941 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441127 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441086 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441311 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441371 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441675 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441704 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441869 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441875 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.441984 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.442140 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.442301 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.442334 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.443480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.443610 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.443746 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.443765 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.443777 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.444159 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.444254 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.444774 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.444809 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.444882 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.444973 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.445296 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.446405 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.446463 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.446950 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.447156 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.447727 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.447389 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.448038 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.448243 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.448457 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.448647 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.448897 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.449025 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.449322 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.449458 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.449586 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450265 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450466 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450499 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450594 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450658 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450651 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.450917 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451260 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451334 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451277 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451370 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451727 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451779 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.451989 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.452164 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.452648 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.452938 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453034 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453070 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453456 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453491 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453517 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453773 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454005 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.453909 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454116 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454370 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454367 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454383 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454631 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454640 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.454764 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.455139 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.455336 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.455595 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.455893 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.455959 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.456109 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.456436 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.456587 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.456643 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.456678 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.456987 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.457377 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.457877 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.457927 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.457981 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.458513 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.458598 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.458607 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.459093 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.460128 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.461123 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.461601 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.461774 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.462031 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.462047 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.462652 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.462973 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.462995 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.463367 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.463606 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.463756 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.463805 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.463837 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.463991 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.464152 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.464919 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.465019 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.464678 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.465192 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.465413 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.465650 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.466026 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.466122 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.466324 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.466908 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.465156 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.466910 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.467117 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.467132 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.467276 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468019 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468160 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468249 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468264 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468371 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.469070 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.469146 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:05.969125776 +0000 UTC m=+21.339036033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468441 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468783 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.468739 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.469629 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.469279 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.470364 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.470121 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.470697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.470936 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.471098 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.471703 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.471761 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.471809 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.471856 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:05.971844177 +0000 UTC m=+21.341754434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.472455 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.472527 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.472553 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.472834 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.473486 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.473992 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.472751 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.472926 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.473196 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.473581 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.473603 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.473375 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.474270 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.474562 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.474600 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.474607 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.477745 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.484156 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.485075 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.485338 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.485380 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.485433 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.486027 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:05.985990931 +0000 UTC m=+21.355901188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.486271 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.487747 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.487946 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.488053 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.488094 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.488998 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.489057 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.489273 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.490786 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.490989 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.492534 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.492678 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.496127 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.496184 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.496942 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.498278 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.498475 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.498531 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.498542 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.498803 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.498837 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.498877 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.498941 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:05.998918643 +0000 UTC m=+21.368828900 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.503124 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.503371 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.506037 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.506894 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.507665 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.507740 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.508043 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.519703 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.532705 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.533535 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537381 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537459 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537524 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537541 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537558 4931 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537572 4931 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537585 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537599 4931 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537613 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537626 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537640 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537654 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537667 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537681 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537696 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537709 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537723 4931 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537736 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537749 4931 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537762 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537775 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537819 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537834 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537848 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537862 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537876 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537891 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537907 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537919 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537932 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537946 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537959 4931 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537971 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537985 4931 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.537999 4931 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538011 4931 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538024 4931 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538036 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538049 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538062 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538076 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538088 4931 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538101 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538114 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538128 4931 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538142 4931 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538154 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538170 4931 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538182 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538195 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538207 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538220 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538233 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538246 4931 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538259 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538272 4931 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538285 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538297 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538311 4931 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538324 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538338 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538350 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538362 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538375 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538389 4931 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538401 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538437 4931 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538450 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538464 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538476 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538490 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538503 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538516 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538529 4931 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538541 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538553 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538566 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538580 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538596 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538610 4931 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538623 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538636 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538648 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538662 4931 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538675 4931 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538688 4931 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538702 4931 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538715 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538728 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538742 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538755 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538768 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538783 4931 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538797 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538811 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538824 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538836 
4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538851 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538868 4931 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538882 4931 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538894 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538907 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538920 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538934 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538948 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538963 4931 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538976 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.538990 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539002 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539015 
4931 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539028 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539040 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539052 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539065 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539079 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539092 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539105 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539119 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539134 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539147 4931 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539159 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539172 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539185 
4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539197 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539210 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539222 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539236 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539306 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539319 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539334 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539353 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539366 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539378 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539391 4931 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539403 4931 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539432 4931 reconciler_common.go:293] "Volume detached for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539446 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539459 4931 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539472 4931 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539486 4931 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539498 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539510 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539524 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539537 4931 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539550 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539564 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539578 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539591 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539603 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539617 4931 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539630 4931 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539645 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539659 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539672 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539685 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539698 4931 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539710 4931 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539722 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539736 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539748 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539760 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539774 4931 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539791 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539804 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539819 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539832 4931 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539845 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539857 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539872 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539884 4931 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539896 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539909 4931 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539922 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539934 4931 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539948 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539961 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539974 4931 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.539988 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540001 4931 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540015 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540030 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540044 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540058 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540072 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540086 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540099 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540111 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540124 4931 reconciler_common.go:293] "Volume detached for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540137 4931 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540149 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540162 4931 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540175 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540190 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540202 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.540282 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.541130 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.541760 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.550062 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.553632 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.559450 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.560344 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.561515 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64" exitCode=255 Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.561565 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64"} Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.573928 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.591026 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.602705 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.614007 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.629792 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.640829 4931 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.640873 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.641817 4931 scope.go:117] "RemoveContainer" containerID="13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.641859 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.672040 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.674372 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.677729 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 05:08:05 crc kubenswrapper[4931]: W0130 05:08:05.682832 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-6b834b3df303d8dc92f569b168d2312b27cd03de53fee3233ef600d7c8f06593 WatchSource:0}: Error finding container 6b834b3df303d8dc92f569b168d2312b27cd03de53fee3233ef600d7c8f06593: Status 404 returned error can't find the container with id 6b834b3df303d8dc92f569b168d2312b27cd03de53fee3233ef600d7c8f06593 Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.687818 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.690195 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.702538 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: W0130 05:08:05.709232 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-ab6cac5ae8910baa78fd8ebe52c8ca97d361d058afd87642d36c42f0c7e2f80e WatchSource:0}: Error finding container ab6cac5ae8910baa78fd8ebe52c8ca97d361d058afd87642d36c42f0c7e2f80e: Status 404 returned error can't find the container with id ab6cac5ae8910baa78fd8ebe52c8ca97d361d058afd87642d36c42f0c7e2f80e Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.717558 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.735095 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.746799 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 05:08:05 crc kubenswrapper[4931]: I0130 05:08:05.944620 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:05 crc kubenswrapper[4931]: E0130 05:08:05.944817 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:08:06.944777329 +0000 UTC m=+22.314687576 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.045143 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.045188 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.045209 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.045230 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045345 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045402 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:07.045387788 +0000 UTC m=+22.415298045 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045466 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045486 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:07.045480641 +0000 UTC m=+22.415390898 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045547 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045557 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045569 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045589 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:07.045582633 +0000 UTC m=+22.415492890 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045631 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045640 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045646 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.045664 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:07.045658265 +0000 UTC m=+22.415568522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.105155 4931 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.295072 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 05:03:05 +0000 UTC, rotation deadline is 2026-10-23 05:51:26.216198548 +0000 UTC Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.295162 4931 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6384h43m19.921038667s for next certificate rotation Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.373553 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:10:03.857174746 +0000 UTC Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.567836 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.570709 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 
05:08:06.571077 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.572517 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"100862e7f6ab10748cb8df309c37999e9df3c5b541b5fa2ae9eca60d280d80fe"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.576322 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.576398 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.576415 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ab6cac5ae8910baa78fd8ebe52c8ca97d361d058afd87642d36c42f0c7e2f80e"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.588235 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.588318 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6b834b3df303d8dc92f569b168d2312b27cd03de53fee3233ef600d7c8f06593"} Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.592144 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.609013 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.630279 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.646555 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.662101 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.672394 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.711511 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.747593 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.762360 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.777583 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.796636 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.821138 4931 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.836901 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.867015 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.885601 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.898814 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.917497 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.932516 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:06Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:06 crc kubenswrapper[4931]: I0130 05:08:06.953060 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:06 crc kubenswrapper[4931]: E0130 05:08:06.953313 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:08:08.953260887 +0000 UTC m=+24.323171144 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.054232 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.054727 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.054929 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.055114 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.054501 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.054880 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055474 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055490 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055554 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:09.05553653 +0000 UTC m=+24.425446777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.054997 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055575 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055582 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055603 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:09.055597352 +0000 UTC m=+24.425507609 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055203 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.055824 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:09.055775927 +0000 UTC m=+24.425686214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.056117 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:09.056031793 +0000 UTC m=+24.425942330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.200702 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xjfpj"] Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.201093 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.208370 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.210748 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.210846 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.228444 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.252913 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.265086 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.327192 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.358064 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c26ef8ba-80e9-4ce4-a950-9333ceda4fab-hosts-file\") pod \"node-resolver-xjfpj\" (UID: \"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\") " pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.358451 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs6rf\" (UniqueName: \"kubernetes.io/projected/c26ef8ba-80e9-4ce4-a950-9333ceda4fab-kube-api-access-zs6rf\") pod \"node-resolver-xjfpj\" (UID: \"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\") " pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.365221 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.374538 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:15:29.738660423 +0000 UTC Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.407671 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.421527 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.421527 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.421656 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.421707 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.421936 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:07 crc kubenswrapper[4931]: E0130 05:08:07.422179 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.425114 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.425633 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.426885 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.427490 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.428460 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.428924 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.429501 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.430507 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.435519 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.436117 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.440075 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.440287 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.441282 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.442609 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.443134 4931 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.444107 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.444643 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.445613 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.446034 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.446638 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.447725 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.448174 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.448742 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.450327 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.450989 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.452263 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.452961 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.453983 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.454740 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.455839 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.456322 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.456789 4931 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.456891 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.458918 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.459389 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c26ef8ba-80e9-4ce4-a950-9333ceda4fab-hosts-file\") pod \"node-resolver-xjfpj\" (UID: \"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\") " pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.459526 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs6rf\" (UniqueName: \"kubernetes.io/projected/c26ef8ba-80e9-4ce4-a950-9333ceda4fab-kube-api-access-zs6rf\") pod \"node-resolver-xjfpj\" (UID: \"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\") " pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.459581 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c26ef8ba-80e9-4ce4-a950-9333ceda4fab-hosts-file\") pod \"node-resolver-xjfpj\" (UID: \"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\") " pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.459441 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.460565 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.461738 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.463403 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 05:08:07 crc 
kubenswrapper[4931]: I0130 05:08:07.463949 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.465251 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.465971 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.467014 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.467743 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.470233 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.470878 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.473487 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.474278 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.475467 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.476316 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.476593 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.477776 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.478382 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.478928 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.482023 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.482632 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.483622 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.490283 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs6rf\" (UniqueName: 
\"kubernetes.io/projected/c26ef8ba-80e9-4ce4-a950-9333ceda4fab-kube-api-access-zs6rf\") pod \"node-resolver-xjfpj\" (UID: \"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\") " pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.493312 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.514112 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xjfpj" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.593657 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xjfpj" event={"ID":"c26ef8ba-80e9-4ce4-a950-9333ceda4fab","Type":"ContainerStarted","Data":"59c98c3321fd454e9316234349f3454942f46a14ada507055eb594f5606ec0be"} Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.639320 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wfdxs"] Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.639732 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.642162 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.642345 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.642823 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.643146 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.643361 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.659919 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.690202 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.710762 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.746246 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"da
ta-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75fa
a152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.762508 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/189be3dc-d439-47c2-b1f2-7413fc4b5e85-mcd-auth-proxy-config\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.762577 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/189be3dc-d439-47c2-b1f2-7413fc4b5e85-proxy-tls\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.762609 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/189be3dc-d439-47c2-b1f2-7413fc4b5e85-rootfs\") pod \"machine-config-daemon-wfdxs\" (UID: 
\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.762626 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl6mq\" (UniqueName: \"kubernetes.io/projected/189be3dc-d439-47c2-b1f2-7413fc4b5e85-kube-api-access-xl6mq\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.773529 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.790235 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.810176 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.827477 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.842562 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.854321 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:07Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.863358 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl6mq\" (UniqueName: \"kubernetes.io/projected/189be3dc-d439-47c2-b1f2-7413fc4b5e85-kube-api-access-xl6mq\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.863462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/189be3dc-d439-47c2-b1f2-7413fc4b5e85-mcd-auth-proxy-config\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.863524 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/189be3dc-d439-47c2-b1f2-7413fc4b5e85-proxy-tls\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.863601 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/189be3dc-d439-47c2-b1f2-7413fc4b5e85-rootfs\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.863686 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/189be3dc-d439-47c2-b1f2-7413fc4b5e85-rootfs\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.864483 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/189be3dc-d439-47c2-b1f2-7413fc4b5e85-mcd-auth-proxy-config\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.868047 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/189be3dc-d439-47c2-b1f2-7413fc4b5e85-proxy-tls\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.879778 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl6mq\" (UniqueName: \"kubernetes.io/projected/189be3dc-d439-47c2-b1f2-7413fc4b5e85-kube-api-access-xl6mq\") pod \"machine-config-daemon-wfdxs\" (UID: \"189be3dc-d439-47c2-b1f2-7413fc4b5e85\") " pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:07 crc kubenswrapper[4931]: I0130 05:08:07.951828 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.071962 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cdsw5"] Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.074166 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.079869 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.081824 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.081920 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.081961 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.081867 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.082604 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bshbf"] Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.083514 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lm7vv"] Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.084259 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.083701 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.086593 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.087209 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.087444 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.088157 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.088220 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.093739 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.094101 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.095392 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.096185 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.104586 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.121165 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.132851 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.146128 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.164154 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwbjv\" (UniqueName: \"kubernetes.io/projected/556d9fc5-72b4-4134-8074-1e9d07012763-kube-api-access-rwbjv\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc 
kubenswrapper[4931]: I0130 05:08:08.166496 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-os-release\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166528 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-node-log\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166548 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkfl\" (UniqueName: \"kubernetes.io/projected/b17d6adf-e35b-4bf8-9ab2-e6720e595835-kube-api-access-5kkfl\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166563 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-ovn\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166579 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-bin\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166600 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556d9fc5-72b4-4134-8074-1e9d07012763-ovn-node-metrics-cert\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-log-socket\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166642 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-etc-kubernetes\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166662 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-netd\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc 
kubenswrapper[4931]: I0130 05:08:08.166678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-socket-dir-parent\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166697 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-conf-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166713 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-var-lib-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166732 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-daemon-config\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166747 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-multus-certs\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166764 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-cnibin\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166779 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-systemd\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166792 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-netns\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166807 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-systemd-units\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166823 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-cni-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166867 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-cni-bin\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166885 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-slash\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166899 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-k8s-cni-cncf-io\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166915 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/06cb3786-294c-45f0-b414-66d84f8d5786-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166953 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166968 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-script-lib\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.166986 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-system-cni-dir\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167007 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-ovn-kubernetes\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167141 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfvtg\" (UniqueName: \"kubernetes.io/projected/06cb3786-294c-45f0-b414-66d84f8d5786-kube-api-access-gfvtg\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167190 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b17d6adf-e35b-4bf8-9ab2-e6720e595835-cni-binary-copy\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167229 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-os-release\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167270 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167287 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-cni-multus\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-kubelet\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167348 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-kubelet\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167367 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 
05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167390 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-config\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167407 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-hostroot\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167439 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/06cb3786-294c-45f0-b414-66d84f8d5786-cni-binary-copy\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167468 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-netns\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167483 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-etc-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167498 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-env-overrides\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167513 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-system-cni-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.167546 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-cnibin\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.179669 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.195177 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.206594 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.220592 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.235209 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.251886 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268330 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-system-cni-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268380 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-cnibin\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268406 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/06cb3786-294c-45f0-b414-66d84f8d5786-cni-binary-copy\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268442 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-netns\") pod \"ovnkube-node-bshbf\" (UID: 
\"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-etc-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268480 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-env-overrides\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268502 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-node-log\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268524 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwbjv\" (UniqueName: \"kubernetes.io/projected/556d9fc5-72b4-4134-8074-1e9d07012763-kube-api-access-rwbjv\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268546 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-os-release\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268601 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-cnibin\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268640 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-netns\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268691 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-node-log\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268651 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-system-cni-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268615 4931 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5kkfl\" (UniqueName: \"kubernetes.io/projected/b17d6adf-e35b-4bf8-9ab2-e6720e595835-kube-api-access-5kkfl\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268768 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-etc-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268836 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-ovn\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268871 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-bin\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556d9fc5-72b4-4134-8074-1e9d07012763-ovn-node-metrics-cert\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.268947 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-etc-kubernetes\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269002 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-log-socket\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269034 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-netd\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269060 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-socket-dir-parent\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-conf-dir\") pod 
\"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269118 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-var-lib-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269145 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-daemon-config\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269174 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-multus-certs\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269203 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-netns\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269233 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-os-release\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269240 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-cnibin\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269280 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-cnibin\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269292 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-systemd\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269316 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-systemd-units\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 
05:08:08.269325 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-etc-kubernetes\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269334 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-cni-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269358 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-cni-bin\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269366 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-log-socket\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269378 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-k8s-cni-cncf-io\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269403 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-slash\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269443 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/06cb3786-294c-45f0-b414-66d84f8d5786-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269464 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269411 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-netd\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269487 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-script-lib\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269512 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-ovn-kubernetes\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269532 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-system-cni-dir\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269553 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b17d6adf-e35b-4bf8-9ab2-e6720e595835-cni-binary-copy\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269569 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-socket-dir-parent\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269584 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfvtg\" (UniqueName: \"kubernetes.io/projected/06cb3786-294c-45f0-b414-66d84f8d5786-kube-api-access-gfvtg\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269621 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-conf-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269637 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-cni-multus\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269654 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-env-overrides\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269652 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/06cb3786-294c-45f0-b414-66d84f8d5786-cni-binary-copy\") 
pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269708 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-kubelet\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269707 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-slash\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269756 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-multus-certs\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269665 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-var-lib-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269663 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-kubelet\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269835 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-ovn-kubernetes\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269887 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-cni-multus\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269948 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-os-release\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269973 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " 
pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.269999 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-hostroot\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270051 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-ovn\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270084 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-bin\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270145 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-cni-dir\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270196 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-system-cni-dir\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-systemd\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270274 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-os-release\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270382 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/06cb3786-294c-45f0-b414-66d84f8d5786-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270436 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-systemd-units\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270458 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-var-lib-cni-bin\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270477 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-openvswitch\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270518 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-hostroot\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270543 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-netns\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270570 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b17d6adf-e35b-4bf8-9ab2-e6720e595835-host-run-k8s-cni-cncf-io\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270604 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-kubelet\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270639 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270620 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-script-lib\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270662 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-config\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270723 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-kubelet\") 
pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270728 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270782 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b17d6adf-e35b-4bf8-9ab2-e6720e595835-multus-daemon-config\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.270907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/06cb3786-294c-45f0-b414-66d84f8d5786-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.271018 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b17d6adf-e35b-4bf8-9ab2-e6720e595835-cni-binary-copy\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.271188 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-config\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.274833 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556d9fc5-72b4-4134-8074-1e9d07012763-ovn-node-metrics-cert\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.279258 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.290096 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkfl\" (UniqueName: \"kubernetes.io/projected/b17d6adf-e35b-4bf8-9ab2-e6720e595835-kube-api-access-5kkfl\") pod \"multus-lm7vv\" (UID: \"b17d6adf-e35b-4bf8-9ab2-e6720e595835\") " pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.290877 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwbjv\" (UniqueName: \"kubernetes.io/projected/556d9fc5-72b4-4134-8074-1e9d07012763-kube-api-access-rwbjv\") pod \"ovnkube-node-bshbf\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.291867 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfvtg\" (UniqueName: \"kubernetes.io/projected/06cb3786-294c-45f0-b414-66d84f8d5786-kube-api-access-gfvtg\") pod \"multus-additional-cni-plugins-cdsw5\" (UID: \"06cb3786-294c-45f0-b414-66d84f8d5786\") " pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.294150 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.308229 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.321196 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.339828 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.363667 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.374729 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:07:38.807879612 +0000 UTC Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.388868 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.391992 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: W0130 05:08:08.401797 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06cb3786_294c_45f0_b414_66d84f8d5786.slice/crio-c11516796e3f42abaa2b9fe28fc2d0fca97a48759d8fc7805f4a30e93d339fc8 WatchSource:0}: Error finding container c11516796e3f42abaa2b9fe28fc2d0fca97a48759d8fc7805f4a30e93d339fc8: Status 404 returned error can't find the container with id c11516796e3f42abaa2b9fe28fc2d0fca97a48759d8fc7805f4a30e93d339fc8 Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.402382 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.411945 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.414134 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lm7vv" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.425785 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" 
for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.441877 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.456292 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.474144 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.499975 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.600168 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.600226 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" 
event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"ae003bf2c8441af0b322798040d7d0e26c38e678b0b4800e8ee8c379eec9e42a"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.606306 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.606381 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.606403 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"439d57bdeba26e03a9c77905edbb1cc2c5562b619239519cce547d019fdd2647"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.609489 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xjfpj" event={"ID":"c26ef8ba-80e9-4ce4-a950-9333ceda4fab","Type":"ContainerStarted","Data":"8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.610850 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerStarted","Data":"c11516796e3f42abaa2b9fe28fc2d0fca97a48759d8fc7805f4a30e93d339fc8"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.613163 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerStarted","Data":"71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.613204 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerStarted","Data":"83792258b3ac60be35c11a507467e9e1fc774a91e583e6daa617a47d7f261e8d"} Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.619638 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.637046 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.652081 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.671228 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.700004 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.727029 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.740129 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.754614 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.767395 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.786115 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.839816 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.858450 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.874547 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.900348 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.924160 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.948066 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.963928 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.976476 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.981242 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:08 crc kubenswrapper[4931]: E0130 05:08:08.981550 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:08:12.981501129 +0000 UTC m=+28.351411426 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:08:08 crc kubenswrapper[4931]: I0130 05:08:08.993220 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:08Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.016242 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.032042 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.044578 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.055971 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.067876 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 
05:08:09.081222 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.082589 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.082655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.082700 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082759 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082788 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082810 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082826 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.082766 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082891 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:13.082862488 +0000 UTC m=+28.452772965 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082912 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082923 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082943 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.083031 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.082917 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:13.082907099 +0000 UTC m=+28.452817616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.083097 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:13.083072414 +0000 UTC m=+28.452982661 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.083110 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:13.083103105 +0000 UTC m=+28.453013362 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.100194 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"na
me\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.375716 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:35:00.155291584 +0000 UTC Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.421506 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.421661 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.421745 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.422190 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.421930 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:09 crc kubenswrapper[4931]: E0130 05:08:09.422354 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.620256 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.624342 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" exitCode=0 Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.624463 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.624517 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.624533 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.624542 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.624551 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.626884 4931 generic.go:334] "Generic (PLEG): container finished" podID="06cb3786-294c-45f0-b414-66d84f8d5786" containerID="7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe" exitCode=0 Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.626974 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerDied","Data":"7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe"} Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.637517 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.656124 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.675074 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.692082 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.718238 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.735084 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.751693 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.767396 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.790469 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.814477 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.832817 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.849690 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.864328 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.876961 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.896084 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.915261 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.928484 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.950050 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:09 crc kubenswrapper[4931]: I0130 05:08:09.977947 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:09Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.056587 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.075759 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.094597 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.115041 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.130566 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.148758 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.168604 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.377412 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:45:19.04930344 +0000 UTC Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.444308 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vtnpc"] Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.445052 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.448854 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.449178 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.449651 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.449958 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.473903 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.496605 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.497001 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99cb8b56-06fb-4497-82f8-d2ba1887be6a-host\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.497077 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/99cb8b56-06fb-4497-82f8-d2ba1887be6a-serviceca\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.497152 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qqgf\" (UniqueName: \"kubernetes.io/projected/99cb8b56-06fb-4497-82f8-d2ba1887be6a-kube-api-access-2qqgf\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.516320 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.542790 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.567825 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.584518 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.598086 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qqgf\" (UniqueName: \"kubernetes.io/projected/99cb8b56-06fb-4497-82f8-d2ba1887be6a-kube-api-access-2qqgf\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.598196 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99cb8b56-06fb-4497-82f8-d2ba1887be6a-host\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.598213 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.598264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/99cb8b56-06fb-4497-82f8-d2ba1887be6a-serviceca\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.598463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99cb8b56-06fb-4497-82f8-d2ba1887be6a-host\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.600377 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/99cb8b56-06fb-4497-82f8-d2ba1887be6a-serviceca\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.622859 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.634236 4931 generic.go:334] "Generic (PLEG): container finished" podID="06cb3786-294c-45f0-b414-66d84f8d5786" containerID="79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3" exitCode=0 Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.634338 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerDied","Data":"79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3"} Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.634747 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qqgf\" (UniqueName: \"kubernetes.io/projected/99cb8b56-06fb-4497-82f8-d2ba1887be6a-kube-api-access-2qqgf\") pod \"node-ca-vtnpc\" (UID: \"99cb8b56-06fb-4497-82f8-d2ba1887be6a\") " pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.641794 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.642749 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.659031 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.681099 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.705001 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.727906 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.749284 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.761927 4931 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-image-registry/node-ca-vtnpc" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.774454 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.790780 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.817159 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.834328 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.852755 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.882362 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.923208 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.940755 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.958671 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.972810 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:10 crc kubenswrapper[4931]: I0130 05:08:10.992749 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:10Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.005389 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.022489 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.036302 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.055197 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.378605 
4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:48:58.010254189 +0000 UTC Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.421248 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.421254 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.421387 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.421484 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.421541 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.421651 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.630586 4931 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.649737 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.649786 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.649800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.649988 4931 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.654873 4931 generic.go:334] "Generic (PLEG): container finished" podID="06cb3786-294c-45f0-b414-66d84f8d5786" containerID="f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f" exitCode=0 Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.654956 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerDied","Data":"f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.656348 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vtnpc" event={"ID":"99cb8b56-06fb-4497-82f8-d2ba1887be6a","Type":"ContainerStarted","Data":"fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.656413 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vtnpc" event={"ID":"99cb8b56-06fb-4497-82f8-d2ba1887be6a","Type":"ContainerStarted","Data":"bba9207d024c7e38d49589dca195a930a1fcedd09392415e9254f73018a143e0"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.661105 4931 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.661490 4931 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.663362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.663394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.663407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.663442 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.663458 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.671504 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.687172 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.695894 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.704667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.704720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.704736 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.704758 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.704772 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.717493 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.720482 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"b
abf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.724638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.724686 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.724698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.724721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.724734 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.735273 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.738643 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.743027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.743075 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.743086 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.743111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.743128 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.759339 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.762828 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.764182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.764228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.764238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.764259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.764269 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.777366 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"…\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: E0130 05:08:11.777495 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.779669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.779703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.779712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.779728 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.779740 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.787621 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.801539 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.816781 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.828864 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.841205 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.854351 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.868791 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.882754 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.882799 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.882812 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.882831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.882846 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.888101 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.903367 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.925823 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.944957 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.967664 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.983797 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.986599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.986664 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.986677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.986704 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:11 crc kubenswrapper[4931]: I0130 05:08:11.986722 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:11Z","lastTransitionTime":"2026-01-30T05:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.000197 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:11Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.018549 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.036864 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.064743 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.080807 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.089636 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.089671 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.089681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.089698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.089710 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.096665 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.111291 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.128453 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.148160 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.168801 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.192439 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.192485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.192503 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.192521 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.192533 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.295612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.295676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.295694 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.295719 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.295743 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.379013 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:48:54.899378078 +0000 UTC Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.399755 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.399805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.399854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.399876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.399906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.399929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.406594 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.416935 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.437993 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be3
0a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.463643 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.485114 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.507962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.508056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.508077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.508106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.508128 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.509935 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.529293 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.551751 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.567534 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.586274 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.600819 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.611223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.611285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.611304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.611334 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.611356 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.615133 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.632052 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.651802 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.665120 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.667879 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="06cb3786-294c-45f0-b414-66d84f8d5786" containerID="668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630" exitCode=0 Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.669323 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerDied","Data":"668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.674011 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: E0130 05:08:12.678334 4931 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.695263 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.708380 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.714506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.714571 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.714592 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.714620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.714640 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.722918 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.741607 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.761493 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.792630 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.818297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.818363 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.818382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.818409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.818471 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.838223 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.879453 4931 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153
a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.916371 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.922397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.922450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.922462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.922480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.922494 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:12Z","lastTransitionTime":"2026-01-30T05:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.953965 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:12 crc kubenswrapper[4931]: I0130 05:08:12.993805 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:12Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.025931 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.026019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.026051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.026064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.026087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.026102 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.026248 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:08:21.026203204 +0000 UTC m=+36.396113621 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.038289 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.079268 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.115393 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.127598 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.127669 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.127718 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.127750 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.127919 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.127941 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.127956 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128015 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:21.127996485 +0000 UTC m=+36.497906742 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128456 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128494 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:21.128484738 +0000 UTC m=+36.498394995 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128564 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128635 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:21.128625971 +0000 UTC m=+36.498536238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128700 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128713 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128723 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.128750 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:21.128741695 +0000 UTC m=+36.498651952 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.130219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.130298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.130320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.130350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.130368 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.156349 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.199386 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.233870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.233950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.233978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.234014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.234037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.337881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.337938 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.337952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.337975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.337992 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.381112 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:03:46.919517525 +0000 UTC Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.421655 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.421728 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.421672 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.421911 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.422042 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:13 crc kubenswrapper[4931]: E0130 05:08:13.422201 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.441583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.441653 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.441674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.441726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.441752 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.544913 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.544977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.544996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.545022 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.545040 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.648162 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.648267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.648287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.648318 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.648338 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.676329 4931 generic.go:334] "Generic (PLEG): container finished" podID="06cb3786-294c-45f0-b414-66d84f8d5786" containerID="dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af" exitCode=0 Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.677527 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerDied","Data":"dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.711201 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.735907 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.760381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.760529 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.760554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.760586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.760618 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.764257 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.782159 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.798447 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.819580 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.851216 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z 
is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.863877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.863949 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.863968 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.863997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.864015 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.885883 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.905806 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.925466 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.941254 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.963375 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.966918 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.966960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.966971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.966990 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.967003 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:13Z","lastTransitionTime":"2026-01-30T05:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.977842 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:13 crc kubenswrapper[4931]: I0130 05:08:13.995045 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:13Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.012764 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.072011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.072065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.072083 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.072108 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.072127 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.175733 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.175783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.175802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.175829 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.175848 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.282788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.283371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.283394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.283453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.283475 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.382193 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:51:42.196376228 +0000 UTC Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.386998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.387058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.387072 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.387095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.387110 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.490715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.490763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.490774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.490792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.490805 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
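The certificate_manager entry above is the one healthy certificate in this stretch: the kubelet-serving certificate is valid until 2026-02-24, and its rotation deadline of 2025-11-27 is already in the past at the current clock of 2026-01-30, so the kubelet will try to rotate it immediately. A sketch of how such a deadline can arise, assuming the jittered rule client-go's certificate manager is commonly described as using (roughly 70-90% of the certificate's lifetime) and assuming a one-year lifetime; both figures are assumptions, not readings of the implementation:

// rotation.go: compute a jittered rotation deadline from a certificate's
// validity window. The 70-90% band and the one-year lifetime are assumptions
// for illustration.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Rotate somewhere in the 70-90% band of the lifetime so a fleet of
	// nodes does not renew in lockstep.
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed one-year cert
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	// The log's deadline, 2025-11-27, sits at about 76% of such a window,
	// and is before the node's current time, so rotation is already due.
}

The log continues: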
Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.593409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.593477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.593489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.593506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.593516 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.684131 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.685015 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.685074 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.696114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.696138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.696146 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.696170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.696180 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.697055 4931 generic.go:334] "Generic (PLEG): container finished" podID="06cb3786-294c-45f0-b414-66d84f8d5786" containerID="42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead" exitCode=0 Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.697089 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerDied","Data":"42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.715329 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.720174 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.737415 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.760391 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.782379 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.799965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.800027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.800047 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.800075 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.800148 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.802972 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.829383 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca
75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.858661 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132d
ec04b7b2f163e3355d7865f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.876020 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.893142 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.911121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.911173 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.911187 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.911210 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.911224 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:14Z","lastTransitionTime":"2026-01-30T05:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.912140 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc 
kubenswrapper[4931]: I0130 05:08:14.930940 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.944102 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.957862 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.970657 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:14 crc kubenswrapper[4931]: I0130 05:08:14.987878 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.001398 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:14Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.015316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.015492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.015508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.015533 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.015549 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.020239 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.039605 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.053193 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.075971 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.088782 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.120501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.120563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.120576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.120598 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.120613 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.142479 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e
53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.179708 4931 transport.go:147] "Certificate rotation detected, 
shutting down client connections to start using new credentials" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.180913 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-bshbf/status\": read tcp 38.102.83.179:47136->38.102.83.179:6443: use of closed network connection" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.223460 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.223507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.223521 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.223542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.223555 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.228646 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.249457 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.261842 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.272138 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.288933 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.298439 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.312611 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.326629 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.326714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.326757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.326769 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.326788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.326800 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.382693 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:25:10.184092603 +0000 UTC
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.421777 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.421843 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:15 crc kubenswrapper[4931]: E0130 05:08:15.421978 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:08:15 crc kubenswrapper[4931]: E0130 05:08:15.422192 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.422285 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:15 crc kubenswrapper[4931]: E0130 05:08:15.422375 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.428909 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.428962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.428981 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.429008 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.429029 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.445021 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.467409 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.485882 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.507162 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.529764 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.531755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.531831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.531852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.531897 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.531916 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.554974 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.568380 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.593175 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.625771 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132d
ec04b7b2f163e3355d7865f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.635010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.635081 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.635107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.635142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.635167 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.660051 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.681853 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.702360 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.716155 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" 
event={"ID":"06cb3786-294c-45f0-b414-66d84f8d5786","Type":"ContainerStarted","Data":"06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.716256 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.721081 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.737355 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.737488 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.737549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.737572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.737599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.737620 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.751946 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.780757 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.798869 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.812562 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.830467 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.841276 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.841330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.841344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.841368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.841389 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.844979 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.863231 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 
2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.881315 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.923151 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.944484 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.944542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.944562 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.944590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.944610 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:15Z","lastTransitionTime":"2026-01-30T05:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:15 crc kubenswrapper[4931]: I0130 05:08:15.967059 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.000918 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.038673 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:16Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.047608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.047665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.047679 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.047702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.047717 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.075913 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:16Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.119261 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:16Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.151908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.151980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.151999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.152033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.152054 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.162613 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:16Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.209323 4931 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:16Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.255880 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.256339 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.256596 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 
05:08:16.256760 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.257111 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.363169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.363229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.363245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.363266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.363281 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.383101 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:05:46.415415386 +0000 UTC Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.467163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.467244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.467268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.467367 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.467443 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.570684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.570759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.570782 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.570814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.570838 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.675245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.675306 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.675318 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.675343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.675356 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.719890 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.779089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.779168 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.779199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.779232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.779255 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.883027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.883080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.883099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.883123 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.883142 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.985923 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.985960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.985969 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.985985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:16 crc kubenswrapper[4931]: I0130 05:08:16.985997 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:16Z","lastTransitionTime":"2026-01-30T05:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.088675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.088726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.088735 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.088752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.088766 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.130713 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.153547 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.173366 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.190764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.190804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.190815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.190834 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.190846 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.191144 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.214896 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.237601 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.262608 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.281320 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.294083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.294145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.294160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.294183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.294200 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.299245 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.315223 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.330404 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.348805 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.369560 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.384279 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:21:30.021768767 +0000 UTC Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.397257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 
05:08:17.397344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.397370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.397398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.397415 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.397325 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.417315 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.421000 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.421108 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.421022 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:17 crc kubenswrapper[4931]: E0130 05:08:17.421201 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:17 crc kubenswrapper[4931]: E0130 05:08:17.421305 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:17 crc kubenswrapper[4931]: E0130 05:08:17.421453 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.440013 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.500856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.501644 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.501668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.501696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.501714 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.604800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.604909 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.604927 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.604955 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.604974 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.709724 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.709794 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.709815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.709845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.709868 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.731141 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/0.log" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.735606 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4" exitCode=1 Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.735841 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.738317 4931 scope.go:117] "RemoveContainer" containerID="87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.758185 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.777174 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.794356 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.814120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.814190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.814215 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.814245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.814265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.823327 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c3
57e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.845001 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.866114 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.886645 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.914877 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.917976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.918033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.918338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.918369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.918381 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:17Z","lastTransitionTime":"2026-01-30T05:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.943071 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.962236 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.982949 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:17 crc kubenswrapper[4931]: I0130 05:08:17.998159 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:17Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.014482 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.026002 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.026330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.026369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.026390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.026415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.026449 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.037513 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.129057 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.129128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.129141 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.129166 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.129178 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.232878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.232940 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.232954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.232979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.232992 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.335189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.335266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.335278 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.335298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.335309 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.384611 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:46:32.857039471 +0000 UTC Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.437536 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.437967 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.437984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.438006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.438019 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.541108 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.541158 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.541168 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.541186 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.541197 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.644335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.644393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.644407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.644429 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.644455 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.741174 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/0.log" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.743983 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.744166 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.746126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.746179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.746194 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.746216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.746232 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.802033 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.817555 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.832150 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.849389 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.849507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.849531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.849565 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.849584 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.849799 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.863917 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.876529 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.898471 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.934950 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f12
51e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.958877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.958955 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.958975 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.959004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.959023 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:18Z","lastTransitionTime":"2026-01-30T05:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.972343 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:18 crc kubenswrapper[4931]: I0130 05:08:18.994015 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:18Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.013639 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.033480 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.051406 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.062058 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.062132 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.062153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.062185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.062205 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.070227 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.086508 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 
2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.165923 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.165999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.166023 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.166058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.166085 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.269335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.269412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.269434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.269494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.269517 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.374274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.374360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.374385 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.374422 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.374454 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.385155 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:21:35.37487329 +0000 UTC Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.422216 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.422287 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.422237 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:19 crc kubenswrapper[4931]: E0130 05:08:19.422479 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:19 crc kubenswrapper[4931]: E0130 05:08:19.422608 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:19 crc kubenswrapper[4931]: E0130 05:08:19.422845 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.478906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.478978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.479000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.479033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.479055 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.582413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.582551 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.582581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.582619 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.582662 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.686418 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.686543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.686564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.686597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.686619 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.751292 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/1.log" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.752418 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/0.log" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.757630 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431" exitCode=1 Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.757718 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.757871 4931 scope.go:117] "RemoveContainer" containerID="87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.759540 4931 scope.go:117] "RemoveContainer" containerID="dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431" Jan 30 05:08:19 crc kubenswrapper[4931]: E0130 05:08:19.759915 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.781507 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.790064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.790159 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.790185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.790215 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.790234 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.800792 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.816852 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.836196 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.856875 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.882013 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.894055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.894116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.894129 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.894154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.894169 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.905231 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.926952 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.950869 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.988376 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:19Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.997288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.997371 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.997398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.997479 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:19 crc kubenswrapper[4931]: I0130 05:08:19.997511 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:19Z","lastTransitionTime":"2026-01-30T05:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.015236 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:20Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.043816 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:20Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.067858 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:20Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.090294 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:20Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.105647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.105706 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.105725 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.105752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.105770 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.131797 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:20Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.209531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.209595 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.209617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.209648 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.209668 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.313642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.314126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.314310 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.314493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.314639 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.386388 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:11:43.939044063 +0000 UTC Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.418303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.418652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.418863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.419007 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.419156 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.523394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.523488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.523509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.523540 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.523563 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.627674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.627753 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.627774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.627804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.627829 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.731393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.731497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.731522 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.731552 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.731571 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.765933 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/1.log" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.834710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.834772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.834792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.834818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.834836 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.938471 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.938538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.938552 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.938574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.938596 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:20Z","lastTransitionTime":"2026-01-30T05:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.954211 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw"] Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.954885 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.958780 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.959426 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 05:08:20 crc kubenswrapper[4931]: I0130 05:08:20.993175 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:20Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.017273 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.032337 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.032598 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:08:37.032556449 +0000 UTC m=+52.402466716 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.032692 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dss26\" (UniqueName: \"kubernetes.io/projected/f069d6db-7396-4c40-9ea9-4cc66c499cb2-kube-api-access-dss26\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.032749 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f069d6db-7396-4c40-9ea9-4cc66c499cb2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.032824 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f069d6db-7396-4c40-9ea9-4cc66c499cb2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.032932 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f069d6db-7396-4c40-9ea9-4cc66c499cb2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.036751 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.041856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.041908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.041924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.041947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.041959 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.067069 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.097765 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134561 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134680 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f069d6db-7396-4c40-9ea9-4cc66c499cb2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134725 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f069d6db-7396-4c40-9ea9-4cc66c499cb2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134793 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134844 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.134858 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.134902 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134908 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134951 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dss26\" (UniqueName: \"kubernetes.io/projected/f069d6db-7396-4c40-9ea9-4cc66c499cb2-kube-api-access-dss26\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.134989 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f069d6db-7396-4c40-9ea9-4cc66c499cb2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.134918 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135720 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:37.135685955 +0000 UTC m=+52.505596252 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135513 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135794 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135890 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:37.13586578 +0000 UTC m=+52.505776077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135894 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135593 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135947 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.135965 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-30 05:08:37.135951632 +0000 UTC m=+52.505861899 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.136081 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:37.136044274 +0000 UTC m=+52.505954571 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.136428 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f069d6db-7396-4c40-9ea9-4cc66c499cb2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.137172 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f069d6db-7396-4c40-9ea9-4cc66c499cb2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.137993 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f12
51e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.146523 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.146583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.146599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.146618 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.146630 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
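
The status-patch failures above all trace to one cause: Go's certificate verification rejects the webhook's serving certificate because the node clock (2026-01-30) is past the certificate's NotAfter date (2025-08-24). A minimal sketch of that validity-window check, assuming the certificate sits under the /etc/webhook-cert/ mount seen in these pod specs (the tls.crt file name is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is an assumption: the webhook container mounts "webhook-cert"
	// at /etc/webhook-cert/; tls.crt is the conventional Secret key name.
	raw, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// Same shape as the log: "current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z"
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}
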
Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.147385 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f069d6db-7396-4c40-9ea9-4cc66c499cb2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.160360 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
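
A side note on the ovnkube-controller lastState message further up, which opens mid-word with "or.go:311]": when a termination message falls back to container logs, it is filled from the tail of the log file (upstream documentation cites a cap of roughly 80 lines / 2 KiB), so the first line is usually cut. A sketch of such a tail read; the pod log path and the 2048-byte cap here are illustrative assumptions:

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	const maxBytes = 2048 // illustrative cap; docs cite 2048 bytes or 80 lines
	path := "/var/log/pods/example/ovnkube-controller/1.log" // hypothetical path
	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	info, err := f.Stat()
	if err != nil {
		fmt.Println("stat:", err)
		return
	}
	off := info.Size() - maxBytes
	if off < 0 {
		off = 0
	}
	if _, err := f.Seek(off, io.SeekStart); err != nil {
		fmt.Println("seek:", err)
		return
	}
	tail, err := io.ReadAll(f)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Seeking into the middle of a line is why the logged message
	// begins with a fragment like "or.go:311]".
	fmt.Printf("%s", tail)
}
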
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.175725 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dss26\" (UniqueName: \"kubernetes.io/projected/f069d6db-7396-4c40-9ea9-4cc66c499cb2-kube-api-access-dss26\") pod \"ovnkube-control-plane-749d76644c-rmtbw\" (UID: \"f069d6db-7396-4c40-9ea9-4cc66c499cb2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.179728 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.197411 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.211820 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.226420 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.240515 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.250513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.250609 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.250633 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.250670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.250696 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.257339 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.279832 4931 util.go:30] "No sandbox for pod can be found. 
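
Each "Failed to update status for pod" entry above embeds the attempted status patch as JSON that is quoted twice: once by the error message itself and once more by the logger, which is where the \\\" runs come from. A sketch that peels both layers with strconv.Unquote, using a shortened, hypothetical fragment of the same shape:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical, shortened err="..." value shaped like the entries above:
	// the logger quotes the error string, which itself quotes the patch.
	errValue := `"failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"}}\" for pod x"`

	errMsg, err := strconv.Unquote(errValue) // undo the logger's quoting
	if err != nil {
		panic(err)
	}
	// errMsg now reads: failed to patch status "{\"metadata\":...}" for pod x
	const prefix = `failed to patch status `
	const suffix = ` for pod x`
	quoted := errMsg[len(prefix) : len(errMsg)-len(suffix)]

	patch, err := strconv.Unquote(quoted) // undo the error's quoting
	if err != nil {
		panic(err)
	}
	var doc map[string]any
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		panic(err)
	}
	pretty, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(pretty))
}
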
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.281801 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.302490 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: W0130 05:08:21.303182 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf069d6db_7396_4c40_9ea9_4cc66c499cb2.slice/crio-c1c7d22a7d6fe8702f3429f3b1131d5621d02da180c142f6726c1722da402ce4 WatchSource:0}: Error finding container c1c7d22a7d6fe8702f3429f3b1131d5621d02da180c142f6726c1722da402ce4: Status 404 returned error can't find the container with id c1c7d22a7d6fe8702f3429f3b1131d5621d02da180c142f6726c1722da402ce4 Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.323683 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.354847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.354920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.354942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.354971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.354993 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.387281 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:28:16.861045803 +0000 UTC Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.422001 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.422222 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.422533 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.422684 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.422914 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:21 crc kubenswrapper[4931]: E0130 05:08:21.423037 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.458403 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.458470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.458485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.458507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.458521 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.562181 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.562258 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.562276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.562307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.562329 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
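
The KubeletNotReady condition keeps repeating because the runtime reports NetworkReady=false until a CNI configuration appears in /etc/kubernetes/cni/net.d/, which multus and ovn-kubernetes only write once they are up. A sketch of the directory check, assuming libcni's usual *.conf/*.conflist/*.json extensions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configurations:", confs)
}

Once ovnkube-controller stays up long enough to write its config, this check passes and the Ready condition flips back to True.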
Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.665242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.665307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.665325 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.665454 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.665482 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.769752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.769805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.769814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.769835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.769851 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.777490 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" event={"ID":"f069d6db-7396-4c40-9ea9-4cc66c499cb2","Type":"ContainerStarted","Data":"61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.777542 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" event={"ID":"f069d6db-7396-4c40-9ea9-4cc66c499cb2","Type":"ContainerStarted","Data":"6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.777554 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" event={"ID":"f069d6db-7396-4c40-9ea9-4cc66c499cb2","Type":"ContainerStarted","Data":"c1c7d22a7d6fe8702f3429f3b1131d5621d02da180c142f6726c1722da402ce4"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.795291 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-ce
rt/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.811017 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.828043 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib
/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.848437 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee
1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.868603 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.872991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.873060 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.873076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.873101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.873116 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.889084 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.920017 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.949965 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.974108 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.975931 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.975991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.976004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.976027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.976043 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:21Z","lastTransitionTime":"2026-01-30T05:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:21 crc kubenswrapper[4931]: I0130 05:08:21.990482 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:21Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.008216 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.019928 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.033793 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.046223 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.062823 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.079383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.079498 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.079525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.079557 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.079583 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.080938 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.099962 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gt48b"] Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.100621 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.100697 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.123817 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.136828 4931 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.154854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.154892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.154901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.154920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.154932 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.169151 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.185994 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.187314 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.194026 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.194238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.194380 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.194589 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.194754 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.209573 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.218252 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.224004 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.224039 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.224050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.224071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.224086 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.228596 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.241992 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.243882 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.247180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.247267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.247288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.247348 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.247369 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.250456 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dkp\" (UniqueName: \"kubernetes.io/projected/1421762e-4873-46cb-8c43-b8faa0cbca62-kube-api-access-b5dkp\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.250607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.269313 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a0339
41344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.270648 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.277939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.278046 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.278108 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.278142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.278207 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.296172 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.296481 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.299128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.299186 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.299248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.299282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.299305 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.299745 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f12
51e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.328357 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\
\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.346652 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.351929 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dkp\" (UniqueName: \"kubernetes.io/projected/1421762e-4873-46cb-8c43-b8faa0cbca62-kube-api-access-b5dkp\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.352083 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 
05:08:22.352302 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.352452 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:22.852384967 +0000 UTC m=+38.222295234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.359368 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.373947 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.378495 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dkp\" (UniqueName: \"kubernetes.io/projected/1421762e-4873-46cb-8c43-b8faa0cbca62-kube-api-access-b5dkp\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.388038 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:56:29.627971081 +0000 UTC Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.391760 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.401527 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.401661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.401741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.401849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.401929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.410444 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.426240 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc 
kubenswrapper[4931]: I0130 05:08:22.446260 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:22Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.505190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.505254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.505277 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.505309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.505332 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.610679 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.611194 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.611220 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.611258 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.611283 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.714888 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.714961 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.714986 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.715018 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.715045 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.818726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.818819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.818853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.818884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.818906 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.859963 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.860163 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:22 crc kubenswrapper[4931]: E0130 05:08:22.860256 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:23.86023031 +0000 UTC m=+39.230140607 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.922873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.922946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.922965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.922994 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:22 crc kubenswrapper[4931]: I0130 05:08:22.923013 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:22Z","lastTransitionTime":"2026-01-30T05:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.027038 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.027122 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.027145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.027178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.027203 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.131120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.131209 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.131229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.131264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.131284 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.235361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.235463 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.235490 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.235524 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.235544 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.339071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.339496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.339670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.339815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.339942 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.388978 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:56:51.306995113 +0000 UTC Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.422045 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.422137 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.422050 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:23 crc kubenswrapper[4931]: E0130 05:08:23.422277 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.422332 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:23 crc kubenswrapper[4931]: E0130 05:08:23.422552 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:23 crc kubenswrapper[4931]: E0130 05:08:23.422770 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:23 crc kubenswrapper[4931]: E0130 05:08:23.422840 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.447230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.447321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.447352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.447388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.447416 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.551538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.551610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.551632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.551665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.551690 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.654996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.655078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.655103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.655140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.655164 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.759185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.759281 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.759302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.759334 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.759356 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.862637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.862713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.862732 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.862762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.862781 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.872616 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:23 crc kubenswrapper[4931]: E0130 05:08:23.872863 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:23 crc kubenswrapper[4931]: E0130 05:08:23.872976 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:25.8729457 +0000 UTC m=+41.242855987 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.966476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.966549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.966573 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.966606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:23 crc kubenswrapper[4931]: I0130 05:08:23.966629 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:23Z","lastTransitionTime":"2026-01-30T05:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.070394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.070516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.070536 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.070567 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.070598 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.174092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.174180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.174201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.174232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.174256 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.278412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.278560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.278587 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.278621 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.278642 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.381843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.381965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.381991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.382022 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.382046 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.390417 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:33:43.845410119 +0000 UTC Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.485802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.485873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.485894 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.485924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.485953 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.589687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.589749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.589772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.589808 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.589831 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.693550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.693634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.693654 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.693685 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.693711 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.797252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.797362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.797381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.797409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.797458 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.901415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.901517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.901539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.901566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:24 crc kubenswrapper[4931]: I0130 05:08:24.901587 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:24Z","lastTransitionTime":"2026-01-30T05:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.005026 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.005116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.005177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.005212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.005276 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.109201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.109279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.109297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.109671 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.109711 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.212995 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.213076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.213098 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.213129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.213150 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.316222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.316297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.316321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.316350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.316369 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.391378 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:06:08.700387544 +0000 UTC Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.419530 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.419612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.419630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.419656 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.419676 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.421548 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.421760 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.421917 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:25 crc kubenswrapper[4931]: E0130 05:08:25.426938 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.427041 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:25 crc kubenswrapper[4931]: E0130 05:08:25.427560 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:25 crc kubenswrapper[4931]: E0130 05:08:25.428296 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:25 crc kubenswrapper[4931]: E0130 05:08:25.428672 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.450710 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.471474 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.490332 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.509132 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.522943 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.523036 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.523055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.523084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.523106 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.525519 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.545187 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc 
kubenswrapper[4931]: I0130 05:08:25.570097 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.589381 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 
05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.609189 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.626783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.626835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.626853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.626881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.626902 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.631250 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.652612 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.688354 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.713912 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.730278 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.730350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.730370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.730486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.730507 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.735968 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.758350 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.783018 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.819178 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f12
51e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87290a8fe38d346f2c0b20f0e53e42593ca5132dec04b7b2f163e3355d7865f4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610022 6226 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:17.610537 6226 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 05:08:17.611197 6226 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:17.611235 6226 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:17.611280 6226 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:17.611321 6226 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:17.611337 6226 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:17.611361 6226 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:17.611362 6226 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:17.611384 6226 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:17.611466 6226 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:17.611468 6226 factory.go:656] Stopping watch factory\\\\nI0130 05:08:17.611490 6226 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o
://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.833585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.833631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.833650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.833681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.833701 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.897627 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:25 crc kubenswrapper[4931]: E0130 05:08:25.897912 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:25 crc kubenswrapper[4931]: E0130 05:08:25.898104 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:29.898058301 +0000 UTC m=+45.267968678 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.936974 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.937034 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.937053 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.937087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:25 crc kubenswrapper[4931]: I0130 05:08:25.937107 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:25Z","lastTransitionTime":"2026-01-30T05:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.040352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.040458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.040480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.040509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.040533 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.149096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.149199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.149222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.149254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.149281 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.252792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.252875 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.252901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.252934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.252961 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.356908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.356996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.357016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.357048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.357067 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.392183 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:33:32.404804631 +0000 UTC Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.460510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.460586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.460610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.460648 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.460674 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.563634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.563726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.563753 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.563785 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.563808 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.667375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.667486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.667517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.667542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.667563 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.771572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.771627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.771644 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.771674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.771693 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.876196 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.876251 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.876263 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.876282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.876295 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.980184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.980261 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.980281 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.980312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:26 crc kubenswrapper[4931]: I0130 05:08:26.980334 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:26Z","lastTransitionTime":"2026-01-30T05:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.083668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.083712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.083730 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.083752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.083773 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.187964 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.188044 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.188064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.188094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.188114 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.292189 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.292252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.292272 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.292306 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.292329 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.392993 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:16:52.966770067 +0000 UTC Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.395763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.395809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.395826 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.395853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.395870 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.421391 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.421591 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:27 crc kubenswrapper[4931]: E0130 05:08:27.421818 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.422067 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.422150 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:27 crc kubenswrapper[4931]: E0130 05:08:27.422320 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:27 crc kubenswrapper[4931]: E0130 05:08:27.422626 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:27 crc kubenswrapper[4931]: E0130 05:08:27.422842 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.498841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.498929 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.498962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.498998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.499026 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.603097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.603179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.603203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.603235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.603268 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.707750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.707852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.707875 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.707909 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.707931 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.811682 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.811777 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.811801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.811834 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.811855 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.915576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.915652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.915671 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.915705 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:27 crc kubenswrapper[4931]: I0130 05:08:27.915730 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:27Z","lastTransitionTime":"2026-01-30T05:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.019109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.019578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.019734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.019872 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.020017 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:28Z","lastTransitionTime":"2026-01-30T05:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.123276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.123356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.123375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.123404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.123462 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:28Z","lastTransitionTime":"2026-01-30T05:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.234149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.234262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.234282 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.234309 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.234330 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:28Z","lastTransitionTime":"2026-01-30T05:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.338527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.338601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.338620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.338650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.338672 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:28Z","lastTransitionTime":"2026-01-30T05:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.393523 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:04:33.413322942 +0000 UTC Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.442221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.442699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.442945 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.443213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.443644 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:28Z","lastTransitionTime":"2026-01-30T05:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.547885 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.548236 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.548587 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.548856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:28 crc kubenswrapper[4931]: I0130 05:08:28.549149 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:28Z","lastTransitionTime":"2026-01-30T05:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.394446 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:38:40.375586001 +0000 UTC Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.421091 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.421208 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.421371 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.421458 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:29 crc kubenswrapper[4931]: E0130 05:08:29.421560 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:29 crc kubenswrapper[4931]: E0130 05:08:29.421664 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:29 crc kubenswrapper[4931]: E0130 05:08:29.421974 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:29 crc kubenswrapper[4931]: E0130 05:08:29.422137 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.483680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.483736 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.483761 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.483788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.483803 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:29Z","lastTransitionTime":"2026-01-30T05:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.586801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.586846 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.586859 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.586878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.586891 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:29Z","lastTransitionTime":"2026-01-30T05:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.690478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.690555 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.690572 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.690600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.690617 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:29Z","lastTransitionTime":"2026-01-30T05:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:29 crc kubenswrapper[4931]: I0130 05:08:29.954637 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:29 crc kubenswrapper[4931]: E0130 05:08:29.954884 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:29 crc kubenswrapper[4931]: E0130 05:08:29.955017 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:37.954983128 +0000 UTC m=+53.324893425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.006993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.007063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.007084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.007114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.007137 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:30Z","lastTransitionTime":"2026-01-30T05:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.111956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.112021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.112040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.112067 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.112087 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:30Z","lastTransitionTime":"2026-01-30T05:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.394580 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:33:40.440856961 +0000 UTC Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.427792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.427848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.427869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.427893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.427913 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:30Z","lastTransitionTime":"2026-01-30T05:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.531531 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.531593 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.531611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.531639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.531660 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:30Z","lastTransitionTime":"2026-01-30T05:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.642350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.642406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.642422 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.642454 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:30 crc kubenswrapper[4931]: I0130 05:08:30.642465 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:30Z","lastTransitionTime":"2026-01-30T05:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.273579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.273642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.273659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.273683 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.273700 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:31Z","lastTransitionTime":"2026-01-30T05:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.377010 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.377066 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.377087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.377110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.377132 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:31Z","lastTransitionTime":"2026-01-30T05:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.395484 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:14:08.533080136 +0000 UTC Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.421773 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.421801 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.421930 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:31 crc kubenswrapper[4931]: E0130 05:08:31.422002 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.422065 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:31 crc kubenswrapper[4931]: E0130 05:08:31.422157 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:31 crc kubenswrapper[4931]: E0130 05:08:31.422214 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:31 crc kubenswrapper[4931]: E0130 05:08:31.422401 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.479618 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.479672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.479689 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.479715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.479733 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:31Z","lastTransitionTime":"2026-01-30T05:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.582652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.582717 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.582738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.582766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:31 crc kubenswrapper[4931]: I0130 05:08:31.582783 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:31Z","lastTransitionTime":"2026-01-30T05:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.396547 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:37:04.518855358 +0000 UTC Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.422992 4931 scope.go:117] "RemoveContainer" containerID="dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.443744 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.448127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.448188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.448207 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.448235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.448259 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.466766 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.485871 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.512232 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.533220 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.551391 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.553060 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.553149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.553181 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.553216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.553241 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.572772 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.593168 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.615476 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.642955 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.650978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.651040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.651061 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.651094 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.651117 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.667615 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: E0130 05:08:32.671070 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.677249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.677307 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.677327 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.677355 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.677377 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.696122 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: E0130 05:08:32.698637 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.703776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.703824 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.703844 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.703869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.703889 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.719952 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: E0130 05:08:32.723917 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.735849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.735918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.735942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.735972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.735995 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.744507 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: E0130 05:08:32.753422 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.758506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.759031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.759054 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.759079 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.759109 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.763851 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: E0130 05:08:32.779086 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: E0130 05:08:32.779232 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.781489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.781302 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.781534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.781720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.781780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.781809 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.809336 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f12
51e04b84dc4aac74331fb431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.836267 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/1.log" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.840581 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.840768 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.865568 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.885504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.885595 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.885620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.885658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.885683 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.914173 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.938158 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.969377 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.984337 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:32Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.988398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.988488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.988510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.988541 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:32 crc kubenswrapper[4931]: I0130 05:08:32.988564 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:32Z","lastTransitionTime":"2026-01-30T05:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.052362 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.075394 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.090545 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.090575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc 
kubenswrapper[4931]: I0130 05:08:33.090585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.090603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.090616 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.097060 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a
6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.119239 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.130219 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.141521 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.153084 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.165777 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.176922 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.187096 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.193163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.193199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.193211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.193229 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.193241 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.200514 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\
\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.217282 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnk
ube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.295948 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.296044 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.296058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.296078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.296093 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.396851 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:18:22.04971318 +0000 UTC Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.399169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.399326 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.399356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.399390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.399416 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.421058 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.421153 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.421194 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:33 crc kubenswrapper[4931]: E0130 05:08:33.421346 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.421638 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:33 crc kubenswrapper[4931]: E0130 05:08:33.421754 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:33 crc kubenswrapper[4931]: E0130 05:08:33.421983 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:33 crc kubenswrapper[4931]: E0130 05:08:33.422217 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.503348 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.503446 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.503466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.503494 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.503513 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.606507 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.606582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.606603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.606630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.606650 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.709322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.709388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.709406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.709508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.709575 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.813221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.813304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.813327 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.813358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.813382 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.846973 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/2.log" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.847977 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/1.log" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.851917 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26" exitCode=1 Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.851989 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.852054 4931 scope.go:117] "RemoveContainer" containerID="dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.853177 4931 scope.go:117] "RemoveContainer" containerID="43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26" Jan 30 05:08:33 crc kubenswrapper[4931]: E0130 05:08:33.853500 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.879451 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.905003 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 
05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.916452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.916568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.916596 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.916626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.916650 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:33Z","lastTransitionTime":"2026-01-30T05:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.926694 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.949209 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.972092 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:33 crc kubenswrapper[4931]: I0130 05:08:33.997223 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T05:08:33Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.018613 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.020965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.021096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.021167 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.021239 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.021300 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.050531 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.073300 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.089738 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.104037 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.119333 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.124006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.124057 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.124071 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.124096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.124110 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.133593 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.147801 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.168394 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.186382 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.203301 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.227466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.227532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.227552 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.227579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.227599 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.331816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.332222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.332376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.332585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.332751 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.397347 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:00:00.473161663 +0000 UTC
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.436066 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.436137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.436158 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.436184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.436205 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.539293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.539373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.539397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.539462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.539495 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.642877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.642950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.642968 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.642994 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.643018 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.746727 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.746789 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.746810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.746834 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.746856 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.816580 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.829134 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.840297 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.849908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.849968 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.849993 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.850026 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.850051 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.859780 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/2.log"
Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.860242 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 
05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.882812 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.904865 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279948
8ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.928890 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.943610 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.944774 4931 scope.go:117] "RemoveContainer" containerID="43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26" Jan 30 05:08:34 crc kubenswrapper[4931]: E0130 05:08:34.945006 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.949130 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.953095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.953195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.953253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.953280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.953298 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:34Z","lastTransitionTime":"2026-01-30T05:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:34 crc kubenswrapper[4931]: I0130 05:08:34.974776 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:34Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.013085 4931 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dffbdbccf345b155eeca2fb349ba6b9e2a9a2f1251e04b84dc4aac74331fb431\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:18Z\\\",\\\"message\\\":\\\"08:18.741461 6358 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:18.741588 6358 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 05:08:18.741211 6358 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 05:08:18.741681 6358 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:18.741705 6358 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:18.741783 6358 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:18.741792 6358 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:18.741800 6358 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:18.741808 6358 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:18.741833 6358 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:18.741843 6358 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:18.741853 6358 factory.go:656] Stopping watch factory\\\\nI0130 05:08:18.741870 6358 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0130 05:08:18.741878 6358 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.049559 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.059051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.059112 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.059132 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.059163 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.059183 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.067728 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.085323 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.102119 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.121028 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.136797 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.150887 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.162852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.162929 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.162949 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.162977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.162997 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.168001 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.183171 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.204068 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.219834 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.282836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.282890 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.282902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.282928 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.282943 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.287616 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.306213 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.323727 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.345307 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40ce
fd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.358226 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.368865 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.378566 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.385762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.385831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.385849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.385878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.385896 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.393636 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.398568 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:56:39.028715506 +0000 UTC Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.405161 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.417478 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.420947 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.421044 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:35 crc kubenswrapper[4931]: E0130 05:08:35.421129 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.421171 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:35 crc kubenswrapper[4931]: E0130 05:08:35.421209 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.421263 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:35 crc kubenswrapper[4931]: E0130 05:08:35.421410 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:35 crc kubenswrapper[4931]: E0130 05:08:35.421559 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.432076 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.448673 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.462581 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.482357 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.488802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.488858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.488877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.488903 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.488948 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.502841 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.524710 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.550918 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a
6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.587779 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40ce
fd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.591930 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.591999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc 
kubenswrapper[4931]: I0130 05:08:35.592019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.592044 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.592063 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.618459 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.640221 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.660010 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.679684 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.693317 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 
05:08:35.695384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.695436 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.695449 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.695466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.695479 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.709488 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.732815 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.754031 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.777123 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.797648 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.799026 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.799097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.799116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.799140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.799156 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.819094 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.837377 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.857333 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.881926 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.902768 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.902839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.902859 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.902888 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.902908 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:35Z","lastTransitionTime":"2026-01-30T05:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.907376 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:35 crc kubenswrapper[4931]: I0130 05:08:35.923922 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:35Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.006639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.006733 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.006754 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.006787 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.006809 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.111796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.111863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.111883 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.111914 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.111936 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.215953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.216025 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.216045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.216078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.216099 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.320650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.320707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.320718 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.320740 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.320754 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.399022 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:51:54.501664458 +0000 UTC Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.424127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.424184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.424203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.424231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.424251 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.528574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.528650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.528668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.528706 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.528726 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.632252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.632317 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.632338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.632368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.632388 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.736481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.736564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.736586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.736614 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.736633 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.850760 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.850818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.850839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.850870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.850897 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.954477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.954532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.954550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.954577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:36 crc kubenswrapper[4931]: I0130 05:08:36.954595 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:36Z","lastTransitionTime":"2026-01-30T05:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.050617 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.051010 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:09:09.050949528 +0000 UTC m=+84.420859825 (durationBeforeRetry 32s). 
[node-status cycle repeats at 05:08:37.057]
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.152095 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.152164 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.152188 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.152218 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152287 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
"openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152442 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152462 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:09:09.15239147 +0000 UTC m=+84.522301767 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152519 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:09:09.152497193 +0000 UTC m=+84.522407450 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152520 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152556 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152631 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152660 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152748 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:09:09.152714709 +0000 UTC m=+84.522624996 (durationBeforeRetry 32s). 
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152573 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152801 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.152906 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:09:09.152876803 +0000 UTC m=+84.522787260 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
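All of these MountVolume failures share one root cause: so soon after startup, the kubelet has not yet synced the referenced ConfigMaps and Secrets into its local object store, so each lookup reports the object as "not registered". A small, illustrative way to pull the distinct missing objects out of an excerpt like this one (the regex is written against the exact wording shown above; the filename is hypothetical):

    # Aggregate the distinct 'object "ns"/"name" not registered' failures
    # from a saved journal excerpt.
    import re
    from collections import Counter

    PATTERN = re.compile(r'object "([^"]+)"/"([^"]+)" not registered')

    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical excerpt file
        for line in fh:
            for ns, name in PATTERN.findall(line):
                counts[(ns, name)] += 1

    for (ns, name), n in counts.most_common():
        print(f"{n:3d}  {ns}/{name}")
    # Expected from this excerpt: kube-root-ca.crt, openshift-service-ca.crt,
    # networking-console-plugin, networking-console-plugin-cert, metrics-daemon-secret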
[node-status cycle repeats at 05:08:37.160, 05:08:37.264, and 05:08:37.367]
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.399690 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:57:20.171866399 +0000 UTC
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.421711 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.421798 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.421927 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.421942 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.422092 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.422223 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.422456 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.422561 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.470396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.470489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.470509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.470538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.470561 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:37Z","lastTransitionTime":"2026-01-30T05:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status cycle repeats at 05:08:37.470, 05:08:37.573, 05:08:37.677, and 05:08:37.781]
[node-status cycle repeats at 05:08:37.884]
Jan 30 05:08:37 crc kubenswrapper[4931]: I0130 05:08:37.964264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.964511 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 05:08:37 crc kubenswrapper[4931]: E0130 05:08:37.964613 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:08:53.96459396 +0000 UTC m=+69.334504217 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered
[node-status cycle repeats at 05:08:37.987, 05:08:38.090, 05:08:38.194, and 05:08:38.298]
Jan 30 05:08:38 crc kubenswrapper[4931]: I0130 05:08:38.399860 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:00:42.489423639 +0000 UTC
[node-status cycle repeats at 05:08:38.401 and 05:08:38.504]
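The serving certificate expires 2026-02-24, yet each certificate_manager pass prints a different rotation deadline, all already in the past (2026-01-09 first, then 2025-12-04 here, with 2025-11-16, 2025-12-15, and 2025-12-12 below). That pattern is consistent with the client-go certificate manager picking a jittered deadline at a random fraction of the certificate's lifetime and re-rolling it on each sync; a deadline in the past just means rotation should be attempted now. A sketch of that calculation under those assumptions (the 70-90% window and the one-year lifetime are assumptions here, not something this log states):

    # Jittered rotation deadline: a random point late in the cert's lifetime.
    # ASSUMPTION: client-go style 70%-90% jitter window.
    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        lifetime = not_after - not_before
        fraction = 0.7 + 0.2 * random.random()   # assumed jitter window
        return not_before + lifetime * fraction

    # Illustrative values only; the log shows notAfter but not notBefore.
    not_after = datetime.fromisoformat("2026-02-24 05:53:03")
    not_before = not_after - timedelta(days=365)     # hypothetical lifetime
    print(rotation_deadline(not_before, not_after))  # re-rolled each sync, as in the log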
[node-status cycle repeats at 05:08:38.607, 05:08:38.710, 05:08:38.813, 05:08:38.916, 05:08:39.019, and 05:08:39.122]
[node-status cycle repeats at 05:08:39.226 and 05:08:39.330]
Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.400181 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:29:44.225109561 +0000 UTC
Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.421671 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.421727 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.421862 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.422136 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:39 crc kubenswrapper[4931]: E0130 05:08:39.422139 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:39 crc kubenswrapper[4931]: E0130 05:08:39.422348 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:39 crc kubenswrapper[4931]: E0130 05:08:39.422397 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:39 crc kubenswrapper[4931]: E0130 05:08:39.422510 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.433040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.433083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.433095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.433121 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:39 crc kubenswrapper[4931]: I0130 05:08:39.433138 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:39Z","lastTransitionTime":"2026-01-30T05:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[node-status cycle repeats roughly every 100 ms from 05:08:39.433 through 05:08:40.369]
Jan 30 05:08:40 crc kubenswrapper[4931]: I0130 05:08:40.400802 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:42:44.225339105 +0000 UTC
[node-status cycle repeats at 05:08:40.473 and 05:08:40.577]
[node-status cycle repeats at 05:08:40.681, 05:08:40.785, 05:08:40.889, 05:08:40.992, 05:08:41.096, and 05:08:41.200]
Has your network provider started?"}
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.305504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.305578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.305597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.305626 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.305646 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:41Z","lastTransitionTime":"2026-01-30T05:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.401341 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:23:00.576839445 +0000 UTC
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.409259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.409321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.409340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.409370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.409390 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:41Z","lastTransitionTime":"2026-01-30T05:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.421526 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.421614 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.421527 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:41 crc kubenswrapper[4931]: E0130 05:08:41.421688 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:08:41 crc kubenswrapper[4931]: E0130 05:08:41.421854 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:08:41 crc kubenswrapper[4931]: E0130 05:08:41.422030 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.422721 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:41 crc kubenswrapper[4931]: E0130 05:08:41.422968 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.512564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.512627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.512646 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.512672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.512692 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:41Z","lastTransitionTime":"2026-01-30T05:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.617100 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.617180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.617199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.617234 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.617255 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:41Z","lastTransitionTime":"2026-01-30T05:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.720755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.720808 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.720817 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.720837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.720854 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:41Z","lastTransitionTime":"2026-01-30T05:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.999276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.999363 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.999383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.999413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:41 crc kubenswrapper[4931]: I0130 05:08:41.999478 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:41Z","lastTransitionTime":"2026-01-30T05:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.102970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.103032 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.103045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.103101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.103118 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.206457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.206544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.206561 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.206590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.206614 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.310747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.310825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.310850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.310884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.310906 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.402311 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:49:51.382253107 +0000 UTC
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.414508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.414663 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.414694 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.414740 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.414767 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.519283 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.519372 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.519401 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.519466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.519493 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.623466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.623910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.624068 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.624280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.624490 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.728289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.728357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.728374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.728406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.728453 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.832876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.833316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.833513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.833680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.833833 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.938030 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.938548 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.938957 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.939149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.939338 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.949914 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.949994 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.950021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.950058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.950088 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:42 crc kubenswrapper[4931]: E0130 05:08:42.973060 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:42Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.979665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.979734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.979759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.979792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:42 crc kubenswrapper[4931]: I0130 05:08:42.979817 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:42Z","lastTransitionTime":"2026-01-30T05:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.002980 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:42Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.008613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.008867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.009161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.009330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.009523 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.037831 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:43Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.044841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.044901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.044922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.044952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.044975 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.067954 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:43Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.074144 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.074205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.074223 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.074247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.074265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.098462 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:43Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.098627 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.101284 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.101362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.101391 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.101483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.101517 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.205388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.205495 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.205515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.205550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.205571 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.309598 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.309665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.309684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.309711 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.309732 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.402747 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:11:21.851445623 +0000 UTC Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.413406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.413520 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.413548 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.413580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.413611 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.422016 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.422087 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.422207 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.422241 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.422616 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.422864 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.423038 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:43 crc kubenswrapper[4931]: E0130 05:08:43.423151 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.520537 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.520650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.520676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.520708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.520731 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.625106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.625192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.625215 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.625243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.625264 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.728963 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.729035 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.729063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.729092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.729113 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.833250 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.833321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.833341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.833369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.833390 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.937771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.937829 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.937845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.937868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:43 crc kubenswrapper[4931]: I0130 05:08:43.937883 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:43Z","lastTransitionTime":"2026-01-30T05:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.041732 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.041797 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.041816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.041843 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.041864 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:44Z","lastTransitionTime":"2026-01-30T05:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.145971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.146037 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.146056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.146087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.146110 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:44Z","lastTransitionTime":"2026-01-30T05:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.249725 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.249803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.249823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.249853 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.249879 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:44Z","lastTransitionTime":"2026-01-30T05:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.354195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.354276 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.354296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.354327 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.354351 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:44Z","lastTransitionTime":"2026-01-30T05:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.403832 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:54:31.833241348 +0000 UTC Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.458207 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.458289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.458311 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.458336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.458354 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:44Z","lastTransitionTime":"2026-01-30T05:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.562235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.562316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.562335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.562362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:44 crc kubenswrapper[4931]: I0130 05:08:44.562382 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:44Z","lastTransitionTime":"2026-01-30T05:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the preceding five-message cycle repeats at roughly 100 ms intervals, differing only in timestamps, at 05:08:44.666, 05:08:44.769, 05:08:44.874, 05:08:44.984, 05:08:45.089, 05:08:45.193, and 05:08:45.297; seven repeats elided]
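The readiness loop above has a single cause: the kubelet finds no CNI configuration in the directory named by the NetworkPluginNotReady message. A quick cross-check while triaging a node stuck in this state is to list that directory yourself; the sketch below is a hypothetical helper, not part of the log or of any OpenShift tooling. It assumes shell and Go access on the node (for example via "oc debug node/crc"); only the directory path is taken verbatim from the error message.

    // cnicheck.go: list what the kubelet would find in its CNI conf dir.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d/" // path quoted in the kubelet error above
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", dir, err)
            os.Exit(1)
        }
        if len(entries) == 0 {
            // Matches the NetworkReady=false condition: nothing for the
            // network plugin machinery to load.
            fmt.Printf("%s is empty\n", dir)
            return
        }
        for _, e := range entries {
            fmt.Println(filepath.Join(dir, e.Name())) // .conf/.conflist/.json entries are the ones that count
        }
    }

An empty listing is exactly the state the message describes; the multus and ovnkube-node pods that appear later in this excerpt are the components that would normally supply that configuration.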
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.400319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.400380 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.400398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.400475 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.400496 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:45Z","lastTransitionTime":"2026-01-30T05:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.404312 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:41:08.507454626 +0000 UTC
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.421208 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.421238 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:45 crc kubenswrapper[4931]: E0130 05:08:45.421946 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.422030 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.421978 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:45 crc kubenswrapper[4931]: E0130 05:08:45.422187 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:08:45 crc kubenswrapper[4931]: E0130 05:08:45.422414 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:08:45 crc kubenswrapper[4931]: E0130 05:08:45.422666 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.446556 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.464240 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.485264 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.504224 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.504365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.504392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.504481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.504510 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:45Z","lastTransitionTime":"2026-01-30T05:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.505066 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.521868 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.543232 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.575100 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.594138 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 
05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.608917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.609059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.609231 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.609267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.610241 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:45Z","lastTransitionTime":"2026-01-30T05:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.614723 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.640708 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.665730 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 
2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.690178 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.714377 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.714487 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.714510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.714539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.714559 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:45Z","lastTransitionTime":"2026-01-30T05:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.725504 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a
6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.764134 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40ce
fd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.786585 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.809325 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.817005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.817060 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.817083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.817112 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.817134 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:45Z","lastTransitionTime":"2026-01-30T05:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.828649 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.850364 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:45Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.921020 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.921089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:45 crc 
kubenswrapper[4931]: I0130 05:08:45.921106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.921135 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:45 crc kubenswrapper[4931]: I0130 05:08:45.921154 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:45Z","lastTransitionTime":"2026-01-30T05:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.024876 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.025227 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.025378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.025571 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.025708 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.129295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.129944 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.130148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.130357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.130594 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.234883 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.234945 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.234963 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.234992 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.235013 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.338866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.338923 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.338941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.338966 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.338986 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.405304 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:34:33.20032674 +0000 UTC Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.442405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.442493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.442516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.442544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.442566 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.545629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.545687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.545703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.545731 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.545749 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.649134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.649709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.649902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.650064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.650220 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.753881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.753944 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.753962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.753989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.754010 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.857752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.858137 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.858286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.858513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.858676 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.962154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.962211 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.962228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.962255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:46 crc kubenswrapper[4931]: I0130 05:08:46.962273 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:46Z","lastTransitionTime":"2026-01-30T05:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.071916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.071982 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.072001 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.072028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.072047 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.175713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.175784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.175803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.175831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.175854 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.278806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.278860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.278878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.278903 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.278921 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.382221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.382287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.382305 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.382370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.382390 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.406048 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:25:34.000414455 +0000 UTC
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.421717 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.421757 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.421765 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:47 crc kubenswrapper[4931]: E0130 05:08:47.421894 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.422162 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:47 crc kubenswrapper[4931]: E0130 05:08:47.423150 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.423559 4931 scope.go:117] "RemoveContainer" containerID="43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26"
Jan 30 05:08:47 crc kubenswrapper[4931]: E0130 05:08:47.423256 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:08:47 crc kubenswrapper[4931]: E0130 05:08:47.423036 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:08:47 crc kubenswrapper[4931]: E0130 05:08:47.424098 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.485947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.486000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.486011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.486031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.486046 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.589904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.589961 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.589977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.589999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.590013 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.693266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.693819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.693978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.694161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.694318 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.797355 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.797466 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.797489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.797518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.797537 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.901720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.901791 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.901810 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.901836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:47 crc kubenswrapper[4931]: I0130 05:08:47.901855 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:47Z","lastTransitionTime":"2026-01-30T05:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.005608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.005703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.005730 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.005763 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.005786 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.108875 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.108982 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.109001 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.109031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.109050 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.211534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.211599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.211646 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.211669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.211682 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.315621 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.315704 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.315721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.315780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.315804 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.407175 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:29:05.73141213 +0000 UTC
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.419184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.419227 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.419238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.419259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.419270 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.521775 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.521856 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.521875 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.521904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.521922 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.626726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.626805 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.626826 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.626854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.626875 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.730315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.730392 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.730417 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.730486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.730516 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.833978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.834055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.834081 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.834118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.834146 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.937033 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.937111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.937135 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.937169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:48 crc kubenswrapper[4931]: I0130 05:08:48.937190 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:48Z","lastTransitionTime":"2026-01-30T05:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.039970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.040059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.040083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.040116 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.040144 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.143501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.143547 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.143560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.143577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.143589 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.247788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.247825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.247838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.247852 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.247861 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.351140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.351213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.351243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.351279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.351304 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.407857 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:59:05.613225936 +0000 UTC
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.421507 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.421632 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.421821 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:49 crc kubenswrapper[4931]: E0130 05:08:49.421805 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:08:49 crc kubenswrapper[4931]: E0130 05:08:49.421962 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:08:49 crc kubenswrapper[4931]: E0130 05:08:49.422162 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.422531 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:49 crc kubenswrapper[4931]: E0130 05:08:49.423025 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.454478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.454578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.454602 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.454632 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.454653 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.558487 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.558564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.558582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.558609 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.558628 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.662256 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.662298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.662322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.662340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.662354 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.765157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.765749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.765962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.766173 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.766325 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.870598 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.871186 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.871345 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.871532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.871696 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.974319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.974362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.974374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.974390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:49 crc kubenswrapper[4931]: I0130 05:08:49.974400 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:49Z","lastTransitionTime":"2026-01-30T05:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.077502 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.077538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.077547 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.077563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.077572 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.180977 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.181025 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.181040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.181056 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.181069 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.284006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.284622 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.284860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.285120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.285317 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.389186 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.389614 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.389727 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.389818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.389900 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.408825 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:23:59.561893276 +0000 UTC
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.492550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.492600 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.492612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.492630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.492641 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.596811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.596877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.596898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.596924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.596944 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.699297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.699375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.699399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.699481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.699505 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.803058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.803101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.803111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.803134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.803145 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.906589 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.906661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.906686 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.906716 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:50 crc kubenswrapper[4931]: I0130 05:08:50.906738 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:50Z","lastTransitionTime":"2026-01-30T05:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.010586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.010774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.010854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.011013 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.011107 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.114594 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.114651 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.114678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.114696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.114707 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.217457 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.217525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.217539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.217582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.217595 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.321100 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.321143 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.321154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.321172 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.321180 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.409303 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:27:31.508773818 +0000 UTC
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.421859 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.421856 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.422002 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:08:51 crc kubenswrapper[4931]: E0130 05:08:51.422016 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.421989 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:08:51 crc kubenswrapper[4931]: E0130 05:08:51.422137 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:08:51 crc kubenswrapper[4931]: E0130 05:08:51.422276 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:08:51 crc kubenswrapper[4931]: E0130 05:08:51.422446 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.423869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.423903 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.423915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.423935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.423955 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.526571 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.526621 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.526631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.526649 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.526662 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.629017 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.629076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.629087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.629110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.629124 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.733404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.733492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.733504 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.733524 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.733536 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.836820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.836880 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.836892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.836914 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.836929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.939901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.939970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.939984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.940011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:51 crc kubenswrapper[4931]: I0130 05:08:51.940029 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:51Z","lastTransitionTime":"2026-01-30T05:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.042342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.042462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.042489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.042523 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.042543 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.145441 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.145522 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.145538 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.145558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.145569 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.248611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.248684 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.248703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.248734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.248753 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.352790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.352875 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.352894 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.352925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.352946 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.409689 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:27:23.730982746 +0000 UTC Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.455739 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.455828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.455848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.455879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.455903 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.559170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.559249 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.559267 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.559299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.559327 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.663412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.663493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.663510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.663534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.663550 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.766854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.766912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.766922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.766941 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.766952 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.870519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.870579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.870592 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.870615 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.870628 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.974148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.974256 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.974274 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.974300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:52 crc kubenswrapper[4931]: I0130 05:08:52.974322 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:52Z","lastTransitionTime":"2026-01-30T05:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.077361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.077398 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.077407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.077443 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.077453 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.180912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.180996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.181015 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.181078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.181094 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.283751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.283817 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.283834 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.283860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.283881 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.343546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.343637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.343659 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.343688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.343707 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.365813 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:53Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.370922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.370972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.370983 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.371005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.371018 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.384768 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:53Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.388655 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.388718 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.388739 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.388765 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.388785 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.407109 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:53Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.410057 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:03:57.288589207 +0000 UTC Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.410728 4931 
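
The root cause of the patch failure above is not the node status itself: the kubelet assembled a valid status, but the API server must first consult the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. Every retry below fails the same way until that certificate is rotated. A minimal Go sketch of the same time-window check the TLS handshake performs (only the address comes from the log; everything else is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip chain verification so we can still inspect an already-expired cert.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	if now.After(cert.NotAfter) {
		// The condition behind "certificate has expired or is not yet valid".
		fmt.Println("certificate has expired")
	}
}

Skipping chain verification is what lets the sketch read NotAfter off an expired certificate; the kubelet's webhook client of course does not skip it, which is why the Post fails outright.
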
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.410778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.410797 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.410820 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.410836 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.421302 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.421330 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.421334 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.421397 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.421531 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.421698 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.421823 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
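
Interleaved with the webhook failures, the kubelet declares the node NotReady for an unrelated reason: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, so sandbox creation for the four listed pods is skipped rather than attempted. A sketch of the equivalent readiness probe (the directory name is from the log; the accepted file extensions are an assumption modeled on common CNI config loaders):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension set
			fmt.Println("NetworkReady=true, found", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
}

The config normally appears once the network provider's pods come up (OVN-Kubernetes here, judging by the network-node-identity webhook), at which point NetworkReady flips to true and the queued sandboxes start.
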
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.421963 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.427181 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:53Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.431841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.431899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
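
The payload the kubelet keeps resending is a strategic merge patch against the node's status subresource. The $setElementOrder/conditions directive is part of that patch format: it records the kubelet's ordering of the conditions list so the server-side merge preserves it when folding the patch into the stored object. A client-go sketch of the same call shape (the kubeconfig path and the trimmed payload are assumptions; on this cluster the call would fail exactly as logged, since the webhook intercepts node patches before they are persisted):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Heavily trimmed stand-in for the patch body seen in the log.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "crc",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	fmt.Println("patch error:", err)
}
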
event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.431916 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.431942 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.431961 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.444877 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:53Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.445006 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.452628 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
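
"update node status exceeds retry count" closes out one sync attempt: the kubelet retries its status update a small fixed number of times per cycle (five in the kubelet sources; treat the constant below as an assumption), then gives up until the next node-status interval, which is why the same patch and the same webhook error recur in bursts. The loop shape, sketched:

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to match the kubelet constant

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in for the real patch; here it always fails, as in the expired-webhook case.
	return errors.New("failed calling webhook: certificate has expired")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
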
event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.452980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.453089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.453109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.453126 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.555386 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.555458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.555473 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.555493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.555549 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.659083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.659115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.659125 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.659138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.659148 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.761620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.761662 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.761670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.761689 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.761699 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.865164 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.865228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.865247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.865273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.865294 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.965626 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.965833 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:53 crc kubenswrapper[4931]: E0130 05:08:53.965895 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:09:25.965877133 +0000 UTC m=+101.335787400 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.968866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.968923 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.968932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.968949 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:53 crc kubenswrapper[4931]: I0130 05:08:53.968959 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:53Z","lastTransitionTime":"2026-01-30T05:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.071365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.071432 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.071446 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.071461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.071472 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
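
Two distinct failures meet in the mount error above: the secret cannot be fetched because the kubelet's secret manager has no registration yet for "openshift-multus"/"metrics-daemon-secret", and the failed MountVolume operation is then parked under exponential backoff, hence "No retries permitted until ... (durationBeforeRetry 32s)". A sketch of that capped doubling backoff (the 500ms seed and the 2m2s cap are assumptions consistent with the 32s step in the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond          // assumed seed
	maxDelay := 2*time.Minute + 2*time.Second  // assumed cap
	for i := 1; i <= 10; i++ {
		fmt.Printf("failure %2d: durationBeforeRetry %v\n", i, backoff)
		backoff *= 2
		if backoff > maxDelay {
			backoff = maxDelay
		}
	}
}

With these parameters the seventh consecutive failure lands on 32s, matching the retry window recorded above.
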
Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.174333 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.174382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.174391 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.174405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.174440 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.278018 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.278076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.278085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.278101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.278112 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.380872 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.380946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.380961 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.380979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.380989 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.410654 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:31:54.034087616 +0000 UTC Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.483759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.483819 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.483828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.483845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.483856 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.587315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.587380 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.587397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.587451 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.587470 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.690370 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.690411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.690448 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.690469 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.690481 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.793655 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.793722 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.793744 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.793774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.793795 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.895933 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.896000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.896025 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.896055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.896076 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.997950 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.998005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.998014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.998030 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:54 crc kubenswrapper[4931]: I0130 05:08:54.998042 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:54Z","lastTransitionTime":"2026-01-30T05:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.100481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.100518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.100559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.100579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.100589 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.203608 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.203672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.203690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.203717 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.203736 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.306797 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.306865 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.306888 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.306915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.306935 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.409732 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.409772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.409786 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.409803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.409816 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.411059 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:04:50.291938 +0000 UTC Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.421416 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.421503 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:55 crc kubenswrapper[4931]: E0130 05:08:55.421548 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
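
The kubelet-serving certificate lines deserve separating from the webhook failure: that certificate is healthy (expiration 2026-02-24), and the rotation deadline printed alongside it changes on every pass (2025-12-29, 2026-01-01, 2026-01-14) because the certificate manager re-rolls a jittered deadline, picking a random point late in the validity window (roughly the 70 to 90 percent span; the exact fractions below are an assumption) so that a fleet of kubelets never rotates simultaneously. Sketch:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics the jitter: a random point in [70%, 90%] of the
// certificate's validity window (fractions assumed).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	return notBefore.Add(time.Duration(float64(total) * (0.7 + 0.2*rand.Float64())))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed one-year validity
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}

Each call returns a different deadline, which is exactly the pattern in the repeated certificate_manager.go lines.
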
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.421577 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.421576 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:55 crc kubenswrapper[4931]: E0130 05:08:55.421757 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:55 crc kubenswrapper[4931]: E0130 05:08:55.421947 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:55 crc kubenswrapper[4931]: E0130 05:08:55.422487 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.445488 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.460802 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.477823 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.498360 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.512857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.512925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.512939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.513361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.513399 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.522152 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.536231 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.552367 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.588737 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a
6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.610347 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40ce
fd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.616748 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.616798 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc 
kubenswrapper[4931]: I0130 05:08:55.616815 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.616841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.616858 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.630250 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.645513 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.663043 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.676684 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.692826 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.707330 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.720386 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.720606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.720688 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.720771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.720817 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.724367 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.740068 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.757031 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\
\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:55Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.823384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.823470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.823488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.823510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.823528 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.925814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.925871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.925882 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.925903 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:55 crc kubenswrapper[4931]: I0130 05:08:55.925916 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:55Z","lastTransitionTime":"2026-01-30T05:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.030175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.030247 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.030269 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.030296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.030317 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.054298 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/0.log" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.054383 4931 generic.go:334] "Generic (PLEG): container finished" podID="b17d6adf-e35b-4bf8-9ab2-e6720e595835" containerID="71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899" exitCode=1 Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.054455 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerDied","Data":"71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.055063 4931 scope.go:117] "RemoveContainer" containerID="71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.073077 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.088005 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.101066 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.116889 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.128821 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.132352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.132385 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.132395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.132411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.132447 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.144486 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.159395 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.172700 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 
05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.187550 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.208963 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.224396 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 
2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.236563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.236588 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.236597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.236613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.236625 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.237157 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.256546 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.284243 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.311902 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40ce
fd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.329525 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.339183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.339275 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.339295 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.339322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.339367 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.344516 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.361507 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:56Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.411118 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:31:36.976584002 +0000 UTC Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.442637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.442713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.442726 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.442743 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.442755 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.544953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.544990 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.545001 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.545016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.545027 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.647561 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.647604 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.647615 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.647633 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.647647 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.751743 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.751796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.751813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.751838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.751855 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.855302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.855344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.855372 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.855394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.855410 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.958438 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.958500 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.958516 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.958539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:56 crc kubenswrapper[4931]: I0130 05:08:56.958557 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:56Z","lastTransitionTime":"2026-01-30T05:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.060493 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/0.log" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.060557 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerStarted","Data":"c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.062781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.062848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.062868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.062899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.062922 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.086504 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.102881 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.118109 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.133831 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.156343 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a
6623e03d361d12fdafb9be26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.165405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.165477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.165489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.165506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.165533 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.195606 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.251013 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.269199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.269244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.269255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.269271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.269281 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.279899 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.306896 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.318929 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.336074 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.350133 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.362481 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 
05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.371803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.371869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.371891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.371920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.371935 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.377215 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.390146 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.403228 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.412036 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:53:39.472330303 +0000 UTC Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.416053 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.421047 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.421052 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.421316 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:57 crc kubenswrapper[4931]: E0130 05:08:57.421195 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:57 crc kubenswrapper[4931]: E0130 05:08:57.421589 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.421630 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:57 crc kubenswrapper[4931]: E0130 05:08:57.421884 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:57 crc kubenswrapper[4931]: E0130 05:08:57.421691 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.433871 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:08:57Z is after 2025-08-24T17:21:41Z" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.475537 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.475985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.476242 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.476385 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.476540 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.579483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.579534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.579546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.579570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.579586 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.682676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.682747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.682766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.682796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.682815 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.786594 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.786660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.786680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.786705 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.786723 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.889796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.889858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.889877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.889904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.889924 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.992455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.992502 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.992519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.992542 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:57 crc kubenswrapper[4931]: I0130 05:08:57.992569 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:57Z","lastTransitionTime":"2026-01-30T05:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.094747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.094804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.094824 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.094848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.094867 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.198486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.198532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.198544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.198563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.198575 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.301666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.301721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.301738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.301762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.301779 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.405144 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.405221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.405243 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.405273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.405296 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.413248 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:12:52.093603075 +0000 UTC Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.508128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.508174 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.508193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.508219 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.508236 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.613892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.614155 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.614228 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.614301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.614370 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.718103 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.718260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.718286 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.718318 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.718339 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.821544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.821631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.821658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.821687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.821708 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.924972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.925050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.925196 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.925230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:58 crc kubenswrapper[4931]: I0130 05:08:58.925252 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:58Z","lastTransitionTime":"2026-01-30T05:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.028313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.028355 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.028365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.028385 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.028396 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.131055 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.131107 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.131120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.131140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.131150 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.234205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.234260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.234271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.234292 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.234306 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.337753 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.337822 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.337837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.337861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.337873 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.414301 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:12:54.271271346 +0000 UTC Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.421812 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.421880 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:08:59 crc kubenswrapper[4931]: E0130 05:08:59.422035 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.422100 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.422108 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:08:59 crc kubenswrapper[4931]: E0130 05:08:59.422280 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:08:59 crc kubenswrapper[4931]: E0130 05:08:59.422527 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:08:59 crc kubenswrapper[4931]: E0130 05:08:59.424520 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.425190 4931 scope.go:117] "RemoveContainer" containerID="43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.440001 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.440044 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.440059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.440077 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.440096 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.543611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.543695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.543721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.543755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.543780 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.647280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.647350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.647376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.647408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.647509 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.751013 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.751052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.751066 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.751084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.751097 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.854735 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.854790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.854802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.854825 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.854839 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.957514 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.957564 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.957574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.957597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:08:59 crc kubenswrapper[4931]: I0130 05:08:59.957609 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:08:59Z","lastTransitionTime":"2026-01-30T05:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.060407 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.060476 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.060486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.060548 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.060562 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.074460 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/2.log" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.077802 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.079000 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.096098 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.109440 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.124017 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.135550 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.149650 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.163165 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.163220 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.163232 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.163250 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.163264 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.165314 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.180215 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.194036 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\
\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.212151 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.229855 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.245504 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.264553 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.265743 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.265788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.265804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.265823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 
05:09:00.265840 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.281206 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.303989 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.320781 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.343344 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8
b4f237ec1ec01553b5c1f98c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.364668 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.368639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.368704 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.368722 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.368747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.368765 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.380690 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:00Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.414885 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:58:29.37915117 +0000 UTC Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.471710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.471738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.471749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.471762 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.471772 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.574156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.574217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.574236 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.574259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.574278 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.676944 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.677011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.677024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.677046 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.677073 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.780849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.780926 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.780946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.780970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.780987 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.884359 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.884415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.884458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.884482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.884499 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.986749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.986777 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.986788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.986802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:00 crc kubenswrapper[4931]: I0130 05:09:00.986813 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:00Z","lastTransitionTime":"2026-01-30T05:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.083817 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/3.log" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.084765 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/2.log" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088794 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" exitCode=1 Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088840 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088844 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088888 4931 scope.go:117] "RemoveContainer" containerID="43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.088897 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.090070 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:09:01 crc kubenswrapper[4931]: E0130 05:09:01.090354 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.121827 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.151620 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.170117 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.190104 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.192518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.192569 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.192580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.192599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.192611 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.209785 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.225593 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.253623 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.273293 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3a
b5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.293063 4931 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.295031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.295105 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.295132 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.295164 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.295185 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.312268 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.333847 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.362042 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43394fdb8a93566a07f6555221a2aa2a7990c65a6623e03d361d12fdafb9be26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:33Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:117\\\\nI0130 05:08:33.522075 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 05:08:33.522092 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 05:08:33.522157 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 05:08:33.522163 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 05:08:33.522187 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 05:08:33.522204 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 05:08:33.522222 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0130 05:08:33.522225 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0130 05:08:33.522242 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 05:08:33.522257 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 05:08:33.522264 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 05:08:33.522272 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0130 05:08:33.522290 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0130 05:08:33.522388 6561 factory.go:656] Stopping watch factory\\\\nI0130 05:08:33.522398 6561 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:09:00Z\\\",\\\"message\\\":\\\"ints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false}}\\\\nI0130 05:09:00.314584 6959 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 05:09:00.314585 6959 services_controller.go:444] Built service openshift-dns-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314592 6959 services_controller.go:445] Built service openshift-dns-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314395 6959 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 05:09:00.314653 6959 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.379689 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.398970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.399038 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.399049 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.399072 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.399088 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.399951 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.415455 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:37:59.067283235 +0000 UTC Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.416686 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.420925 4931 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.420976 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:01 crc kubenswrapper[4931]: E0130 05:09:01.421064 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.421168 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.421204 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:01 crc kubenswrapper[4931]: E0130 05:09:01.421265 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:01 crc kubenswrapper[4931]: E0130 05:09:01.421627 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:01 crc kubenswrapper[4931]: E0130 05:09:01.421785 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.429791 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.447379 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.462804 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:01Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.502576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.502647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.502673 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.502717 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.502739 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.606795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.606868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.606881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.606900 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.606936 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.711658 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.711744 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.711762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.711793 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.711813 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.815397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.815508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.815528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.815558 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.815584 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.919220 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.919275 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.919297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.919321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:01 crc kubenswrapper[4931]: I0130 05:09:01.919339 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:01Z","lastTransitionTime":"2026-01-30T05:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.023246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.023328 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.023352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.023383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.023404 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.097273 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/3.log" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.103352 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:09:02 crc kubenswrapper[4931]: E0130 05:09:02.103606 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.118565 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.127279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.127341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.127365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.127387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.127404 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.137528 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.152957 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.173572 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.194597 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.207547 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.224237 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.230320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.230415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.230464 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.230495 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.230516 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.239967 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.255837 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.270633 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.286963 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.306390 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.330122 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.335543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.335612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc 
kubenswrapper[4931]: I0130 05:09:02.335631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.335666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.335688 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.365990 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8
b4f237ec1ec01553b5c1f98c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:09:00Z\\\",\\\"message\\\":\\\"ints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 05:09:00.314584 6959 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 05:09:00.314585 6959 services_controller.go:444] Built service openshift-dns-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314592 6959 services_controller.go:445] Built service openshift-dns-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314395 6959 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 05:09:00.314653 6959 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.402044 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40ce
fd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.415856 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:59:37.441258095 +0000 UTC Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.429624 4931 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.439812 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.439891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.439912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.439947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.439966 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.450553 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.474365 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:02Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.543149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.543218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.543237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.543266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.543287 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.647030 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.647104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.647124 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.647149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.647168 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.751642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.751705 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.751723 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.751751 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.751785 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.855046 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.855136 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.855160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.855192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.855216 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.959339 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.959416 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.959471 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.959501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:02 crc kubenswrapper[4931]: I0130 05:09:02.959526 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:02Z","lastTransitionTime":"2026-01-30T05:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.063271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.063361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.063465 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.063491 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.063506 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.167236 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.167320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.167338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.167366 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.167388 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.271563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.271715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.271736 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.271766 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.271788 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.376079 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.376153 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.376171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.376200 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.376219 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.416317 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:59:52.372201449 +0000 UTC Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.421894 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.421991 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.422006 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.422236 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.422667 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.422854 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.423244 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.423541 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.480277 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.480344 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.480365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.480394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.480418 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.557757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.557976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.558000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.558029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.558049 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.583207 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:03Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.589774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.589850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.589870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.589899 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.589918 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.613182 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:03Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.619117 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.619214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.619236 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.619262 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.619281 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.641324 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:03Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.651300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.651612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.651759 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.651910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.652104 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.674358 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:03Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.680747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.680995 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.681142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.681289 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.681488 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.704339 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:03Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:03 crc kubenswrapper[4931]: E0130 05:09:03.705623 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.711216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.711279 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.711297 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.711347 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.711368 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.814381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.814490 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.814508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.814532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.814550 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.917327 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.917390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.917409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.917461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:03 crc kubenswrapper[4931]: I0130 05:09:03.917480 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:03Z","lastTransitionTime":"2026-01-30T05:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.020668 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.020886 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.021039 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.021190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.021340 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.124025 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.124114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.124145 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.124182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.124208 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.227706 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.227756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.227772 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.227793 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.227812 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.331244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.331314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.331337 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.331366 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.331387 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.417091 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:41:30.839384919 +0000 UTC Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.436353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.436408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.436452 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.436478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.436496 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.540838 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.540904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.540915 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.540939 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.540953 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.645268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.645368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.645387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.645415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.645651 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.750106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.750216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.750245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.750277 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.750301 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.854666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.854732 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.854749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.854773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.854792 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.958617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.958678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.958696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.958721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:04 crc kubenswrapper[4931]: I0130 05:09:04.958739 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:04Z","lastTransitionTime":"2026-01-30T05:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.063962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.064051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.064075 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.064119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.064146 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.169764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.169855 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.169891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.169924 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.169942 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.274178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.274306 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.274397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.274597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.274670 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.378660 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.378762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.378783 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.378811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.378831 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.417852 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:54:15.215758887 +0000 UTC Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.421385 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.421671 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:05 crc kubenswrapper[4931]: E0130 05:09:05.421839 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:05 crc kubenswrapper[4931]: E0130 05:09:05.421621 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.421951 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:05 crc kubenswrapper[4931]: E0130 05:09:05.422040 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.422217 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:05 crc kubenswrapper[4931]: E0130 05:09:05.422311 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.448413 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.471881 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.482320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.482373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.482393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.482456 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.482498 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.490940 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.510739 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.529002 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.547825 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.572101 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.585508 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.585580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.585601 4931 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.585634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.585656 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.590822 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-conf
ig\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.613400 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.634260 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.659674 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.684922 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.688956 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.689603 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.689901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.690142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.690342 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.724727 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.752233 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.775135 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.794375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.794816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.794967 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.795090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.795194 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.798120 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.822361 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.862540 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:09:00Z\\\",\\\"message\\\":\\\"ints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 05:09:00.314584 6959 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 05:09:00.314585 6959 services_controller.go:444] Built service openshift-dns-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314592 6959 services_controller.go:445] Built service openshift-dns-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314395 6959 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 05:09:00.314653 6959 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:05Z is after 2025-08-24T17:21:41Z"
Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.900111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.900268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.900290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.900321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:05 crc kubenswrapper[4931]: I0130 05:09:05.900342 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:05Z","lastTransitionTime":"2026-01-30T05:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
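The status patch above records the actual failure on the network side: ovnkube-controller last exited with code 1 after a fatal F0130 log in ovnkube.go and is now in CrashLoopBackOff with a 40s back-off, consistent with the kubelet's persistent NetworkReady=false condition. A minimal triage sketch, assuming oc access to this cluster and using the pod and container names taken from the entry above:

    # Log of the last failed ovnkube-controller run (pod name from the log entry above)
    oc logs -n openshift-ovn-kubernetes ovnkube-node-bshbf -c ovnkube-controller --previous
    # Current waiting reason for the same container (expect CrashLoopBackOff)
    oc get pod -n openshift-ovn-kubernetes ovnkube-node-bshbf \
      -o jsonpath='{.status.containerStatuses[?(@.name=="ovnkube-controller")].state.waiting.reason}'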
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.003575 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.004101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.004354 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.004641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.004866 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.108316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.108387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.108408 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.108467 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.108493 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.212729 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.212816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.212835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.212866 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.212886 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.316714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.317142 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.317355 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.317560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.317748 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.418848 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:12:20.556816041 +0000 UTC
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.422868 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.422954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.422979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.423015 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.423036 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.527818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.527970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.527995 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.528024 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.528042 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.631501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.631590 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.631613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.631644 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.631663 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.735513 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.735587 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.735606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.735637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.735657 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.839690 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.839761 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.839780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.839814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.839833 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.943181 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.943269 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.943296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.943334 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:06 crc kubenswrapper[4931]: I0130 05:09:06.943392 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:06Z","lastTransitionTime":"2026-01-30T05:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.046746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.046829 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.046858 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.046898 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.046925 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.150785 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.150842 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.150857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.150877 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.150891 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.254260 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.254325 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.254346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.254375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.254398 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.357981 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.358045 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.358067 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.358092 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.358110 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
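The five-entry cycle above (three capacity events, NodeNotReady, then the setters.go:603 condition write) repeats roughly every 100 ms: the kubelet re-evaluates readiness on each sync loop and the CNI config directory stays empty. Two quick checks, sketched under the assumption of a shell on the node and a working oc session:

    # On the node: the directory the kubelet error message points at
    ls -l /etc/kubernetes/cni/net.d/
    # From a client: state of the network provider the message asks about
    oc get pods -n openshift-ovn-kubernetes -o wide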
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.420067 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:06:45.289250952 +0000 UTC
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.421473 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.421528 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.421549 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:07 crc kubenswrapper[4931]: E0130 05:09:07.421712 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.421777 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:07 crc kubenswrapper[4931]: E0130 05:09:07.422087 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:09:07 crc kubenswrapper[4931]: E0130 05:09:07.422538 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:09:07 crc kubenswrapper[4931]: E0130 05:09:07.431995 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.445189 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.462291 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.462376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.462404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.462464 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.462486 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.566031 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.566098 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.566120 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.566148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.566166 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
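Every pod status patch in this section is rejected by the same admission webhook: pod.network-node-identity.openshift.io on https://127.0.0.1:9743 serves a certificate whose notAfter is 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30, consistent with a CRC cluster resumed long after its certificates were minted. The served validity window can be confirmed directly from the node with stock openssl:

    # Print the validity window of the certificate the webhook endpoint presents
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates
    # Expected, per the x509 error above: notAfter=Aug 24 17:21:41 2025 GMT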
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.670250 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.670312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.670330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.670355 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.670375 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.774230 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.774302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.774346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.774376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.774399 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.878290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.878345 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.878364 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.878396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.878418 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.981320 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.981383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.981401 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.981450 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:07 crc kubenswrapper[4931]: I0130 05:09:07.981469 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:07Z","lastTransitionTime":"2026-01-30T05:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.084946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.085028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.085051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.085082 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.085105 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.189046 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.189133 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.189157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.189197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.189222 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.292679 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.292761 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.292780 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.292816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.292840 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.397394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.397502 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.397532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.397568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.397596 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.420563 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:59:50.12951867 +0000 UTC
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.500518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.500611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.500639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.500675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.500703 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.604414 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.604539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.604561 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.604591 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.604614 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.708172 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.708251 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.708269 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.708330 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.708354 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.811642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.811694 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.811711 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.811739 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.811756 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.914199 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.914264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.914280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.914303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:08 crc kubenswrapper[4931]: I0130 05:09:08.914322 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:08Z","lastTransitionTime":"2026-01-30T05:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.018176 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.018257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.018275 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.018304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.018326 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.058153 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.058803 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.058748406 +0000 UTC m=+148.428658713 (durationBeforeRetry 1m4s). 
[The node-status sequence repeats at 05:09:09.122; entries elided.]
Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.159572 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.159664 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.159720 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.159768 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.159912 4931 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.159959 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160114 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160136 4931 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.159976 4931 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.159980 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160257 4931 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160273 4931 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160087 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.160060244 +0000 UTC m=+148.529970541 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160347 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.160319902 +0000 UTC m=+148.530230189 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160370 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.160357453 +0000 UTC m=+148.530267740 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.160392 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.160379934 +0000 UTC m=+148.530290221 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
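Note: the burst of MountVolume.SetUp failures above shares one root cause: the kubelet's local object cache has not yet synced the referenced ConfigMaps and Secrets, hence 'object "namespace"/"name" not registered', and each volume is parked with the same 1m4s backoff. A small stdlib-only sketch for summarizing which missing objects are blocking volumes when skimming a long excerpt like this one (the sample line is copied from this log):

    import re
    from collections import Counter

    # Matches the kubelet's 'object "ns"/"name" not registered' errors.
    PATTERN = re.compile(r'object "(?P<ns>[^"]+)"/"(?P<name>[^"]+)" not registered')

    def missing_objects(log_text: str) -> Counter:
        return Counter(f'{m.group("ns")}/{m.group("name")}'
                       for m in PATTERN.finditer(log_text))

    sample = ('E0130 05:09:09.159959 4931 projected.go:288] Couldn\'t get '
              'configMap openshift-network-diagnostics/kube-root-ca.crt: object '
              '"openshift-network-diagnostics"/"kube-root-ca.crt" not registered')

    if __name__ == "__main__":
        for obj, n in missing_objects(sample).most_common():
            print(f"{obj}: {n}")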
Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.329832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.329918 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.329947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.329984 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.330010 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.421284 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:54:37.472449516 +0000 UTC Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.421618 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.421695 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.421695 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.421821 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.421955 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.421970 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.422038 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:09 crc kubenswrapper[4931]: E0130 05:09:09.422154 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.432560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.432601 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.432616 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.432634 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.432647 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.535791 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.535870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.535890 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.535920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.535939 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.639798 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.639873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.639884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.639905 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.639918 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.744059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.744125 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.744135 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.744157 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.744169 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.847339 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.847394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.847404 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.847585 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.847618 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.951702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.951789 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.951816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.951851 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:09 crc kubenswrapper[4931]: I0130 05:09:09.951881 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:09Z","lastTransitionTime":"2026-01-30T05:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.055806 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.055881 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.055902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.055934 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.055956 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.159480 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.159559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.159579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.159611 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.159635 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.263485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.263554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.263577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.263606 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.263627 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.367687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.367808 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.367837 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.367953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.367990 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.421595 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:32:04.19626699 +0000 UTC Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.474845 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.474912 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.474933 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.474966 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.474987 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.577349 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.577413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.577455 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.577483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.577498 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.680889 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.680952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.680971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.680998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.681016 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.784104 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.784183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.784201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.784227 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.784246 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.887014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.887093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.887113 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.887141 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.887162 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.990869 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.990953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.990985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.991015 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:10 crc kubenswrapper[4931]: I0130 05:09:10.991037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:10Z","lastTransitionTime":"2026-01-30T05:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.094846 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.095348 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.095378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.095412 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.095487 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.199047 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.199115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.199140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.199174 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.199198 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.301902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.302132 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.302156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.302190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.302212 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.407072 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.407133 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.407151 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.407184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.407203 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.421217 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.421226 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.421408 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.421493 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:11 crc kubenswrapper[4931]: E0130 05:09:11.421625 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.421701 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:32:26.575580243 +0000 UTC Jan 30 05:09:11 crc kubenswrapper[4931]: E0130 05:09:11.421840 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:11 crc kubenswrapper[4931]: E0130 05:09:11.422029 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:11 crc kubenswrapper[4931]: E0130 05:09:11.422165 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.510358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.510458 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.510481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.510510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.510529 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.613114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.613195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.613214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.613244 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.613265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.716985 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.717050 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.717069 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.717096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.717116 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.820123 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.820166 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.820184 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.820205 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.820239 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.924396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.924493 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.924510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.924534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:11 crc kubenswrapper[4931]: I0130 05:09:11.924551 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:11Z","lastTransitionTime":"2026-01-30T05:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.027996 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.028042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.028061 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.028084 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.028102 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.131695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.131747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.131765 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.131790 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.131806 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.234828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.234879 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.234897 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.234920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.234938 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.337451 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.337506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.337523 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.337546 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.337562 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.422015 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:38:54.591546017 +0000 UTC Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.440756 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.440816 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.440836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.440861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.440881 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.544093 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.544144 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.544161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.544186 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.544204 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.647940 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.648040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.648065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.648097 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.648119 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.751390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.751481 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.751501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.751525 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.751544 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.854692 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.854758 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.854776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.854802 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.854824 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.958733 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.958803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.958830 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.958860 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:12 crc kubenswrapper[4931]: I0130 05:09:12.958885 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:12Z","lastTransitionTime":"2026-01-30T05:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.061951 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.062006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.062022 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.062043 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.062060 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.164952 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.164997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.165014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.165036 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.165057 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.268695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.268757 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.268777 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.268807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.268825 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.372460 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.372518 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.372536 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.372561 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.372579 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.421521 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.421557 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.421521 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.421727 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.421793 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.421959 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.422064 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.422136 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:22:02.572922745 +0000 UTC Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.422259 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.475645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.475707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.475725 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.475750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.475768 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.579166 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.579235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.579252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.579273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.579290 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.683027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.683090 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.683109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.683131 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.683149 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.737353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.737415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.737461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.737486 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.737503 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.756122 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:13Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.761252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.761293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.761302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.761316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.761327 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.783758 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:13Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.788908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.788973 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.788989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.789027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.789052 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.809802 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:13Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.815596 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.815648 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.815672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.815698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.815717 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.828565 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:13Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.833395 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.833580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.833602 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.833628 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.833650 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.853757 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:13Z is after 
2025-08-24T17:21:41Z" Jan 30 05:09:13 crc kubenswrapper[4931]: E0130 05:09:13.854019 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.856854 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.856920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.856947 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.856979 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.857003 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.961570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.961661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.961689 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.961770 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:13 crc kubenswrapper[4931]: I0130 05:09:13.961802 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:13Z","lastTransitionTime":"2026-01-30T05:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.065582 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.065661 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.065686 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.065714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.065734 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.169731 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.169795 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.169809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.169836 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.169852 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.272909 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.273468 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.273651 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.273792 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.273945 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.377867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.378123 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.378265 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.378413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.378616 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.422856 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:00:18.680465006 +0000 UTC Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.481639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.481718 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.481744 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.481771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.481791 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.585492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.585554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.585571 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.585597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.585617 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.688721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.688796 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.688818 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.688847 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.688868 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.793030 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.793140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.793170 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.793208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.793233 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.896887 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.897340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.897462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.897586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:14 crc kubenswrapper[4931]: I0130 05:09:14.897673 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:14Z","lastTransitionTime":"2026-01-30T05:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.001669 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.001755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.001784 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.001823 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.001850 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.105931 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.106000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.106019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.106047 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.106070 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.210192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.210288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.210311 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.210341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.210362 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.313803 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.313861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.313878 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.313901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.313918 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.417800 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.417870 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.417884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.417908 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.417923 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.421138 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.421179 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.421308 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:15 crc kubenswrapper[4931]: E0130 05:09:15.421463 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.421599 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:15 crc kubenswrapper[4931]: E0130 05:09:15.421760 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:15 crc kubenswrapper[4931]: E0130 05:09:15.422055 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:15 crc kubenswrapper[4931]: E0130 05:09:15.422249 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.423131 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:02:54.45188772 +0000 UTC Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.448218 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.467071 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.488818 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.504803 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.521314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.521390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.521409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.521467 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.521489 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.525316 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.583514 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.617412 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.624566 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.624629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.624649 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.624677 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.624697 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.643840 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.662622 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.681488 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.712138 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.727777 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.727835 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.727849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.727872 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.727885 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.747912 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:09:00Z\\\",\\\"message\\\":\\\"ints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 05:09:00.314584 6959 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 05:09:00.314585 6959 services_controller.go:444] Built service openshift-dns-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314592 6959 services_controller.go:445] Built service openshift-dns-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314395 6959 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 05:09:00.314653 6959 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.764557 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.780498 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cba59fc-ee1f-450a-ab9e-2743c1bbb933\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddb77e9defc8c4121eae34daeca1948ee8aef2d6c884fb05b2a5c53e85cbe9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e913d425338029033aa1073be9668185b9f9c4dbb2560466d086b52aa6ce17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e913d425338029033aa1073be9668185b9f9c4dbb2560466d086b52aa6ce17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.798787 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.818293 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.831888 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.831946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.831965 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.831994 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.832015 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.834629 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.854154 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.872939 4931 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:15Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.935652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.935728 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.935746 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.935773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:15 crc kubenswrapper[4931]: I0130 05:09:15.935795 4931 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:15Z","lastTransitionTime":"2026-01-30T05:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.038749 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.038848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.038871 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.038907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.038931 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.142848 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.142900 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.142920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.142946 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.142967 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.247375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.247496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.247620 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.247771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.247805 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.351280 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.351358 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.351377 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.351405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.351455 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.422796 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:09:16 crc kubenswrapper[4931]: E0130 05:09:16.423084 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.423399 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:45:04.421852804 +0000 UTC Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.455118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.455384 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.455621 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.455770 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.455902 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.559488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.559767 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.559895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.560070 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.560229 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.664085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.664196 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.664218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.664811 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.665031 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.769647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.769762 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.769786 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.769822 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.769844 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.873352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.873496 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.873526 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.873565 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.873594 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.977218 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.977316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.977353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.977387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:16 crc kubenswrapper[4931]: I0130 05:09:16.977409 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:16Z","lastTransitionTime":"2026-01-30T05:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.081341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.081415 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.081483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.081510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.081530 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.184638 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.184710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.184728 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.184754 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.184801 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.288369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.288505 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.288544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.288578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.288610 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.391857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.391935 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.391960 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.391991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.392016 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.421948 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.421987 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:17 crc kubenswrapper[4931]: E0130 05:09:17.422514 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.422296 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.422063 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:17 crc kubenswrapper[4931]: E0130 05:09:17.422979 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:17 crc kubenswrapper[4931]: E0130 05:09:17.423096 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:17 crc kubenswrapper[4931]: E0130 05:09:17.423338 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.423653 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:41:34.444683837 +0000 UTC Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.495483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.495951 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.496213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.496405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.496665 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.600813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.600874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.600893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.600921 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.600940 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.705517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.705586 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.705604 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.705630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.705649 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.809399 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.809521 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.809544 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.809576 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.809748 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.914388 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.914551 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.914587 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.914741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:17 crc kubenswrapper[4931]: I0130 05:09:17.914777 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:17Z","lastTransitionTime":"2026-01-30T05:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.018519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.018631 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.018652 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.018678 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.018744 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.123511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.123588 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.123609 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.123641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.123669 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.226349 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.226413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.226470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.226503 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.226524 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.329707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.329781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.329807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.329839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.329858 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.425514 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:53:11.113606302 +0000 UTC
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.434804 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.434874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.434892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.434917 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.435349 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.539185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.539254 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.539273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.539301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.539320 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.643570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.643629 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.643647 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.643670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.643688 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.747674 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.747742 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.747755 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.747778 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.747792 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.852543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.852708 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.852729 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.852752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.852772 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.955814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.955901 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.955920 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.955943 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:18 crc kubenswrapper[4931]: I0130 05:09:18.955963 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:18Z","lastTransitionTime":"2026-01-30T05:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.060617 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.060698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.061000 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.061300 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.061359 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.165782 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.165867 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.165884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.165907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.165921 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.270127 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.270193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.270216 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.270246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.270269 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.374271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.374362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.374382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.374855 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.374921 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.422194 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.422237 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:19 crc kubenswrapper[4931]: E0130 05:09:19.422494 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.422580 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.422237 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:19 crc kubenswrapper[4931]: E0130 05:09:19.422692 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:09:19 crc kubenswrapper[4931]: E0130 05:09:19.422813 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:19 crc kubenswrapper[4931]: E0130 05:09:19.422900 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.427142 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:21:24.386510162 +0000 UTC
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.479037 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.479117 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.479138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.479177 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.479201 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.581612 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.581670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.581698 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.581729 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.581753 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.684316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.684376 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.684400 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.684471 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.684497 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.788287 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.788372 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.788397 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.788488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.788517 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.892101 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.892193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.892214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.892238 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.892257 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.995549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.995663 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.995685 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.995715 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:19 crc kubenswrapper[4931]: I0130 05:09:19.995734 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:19Z","lastTransitionTime":"2026-01-30T05:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.099822 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.099904 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.099925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.099954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.099977 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.203140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.203203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.203220 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.203245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.203265 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.306563 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.306699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.306719 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.306748 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.306768 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.409707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.409774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.409799 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.409832 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.409859 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.427676 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:02:08.75939317 +0000 UTC
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.512570 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.512680 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.512699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.512754 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.512770 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.615741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.615831 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.615861 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.615894 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.615916 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.719268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.719378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.719411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.719472 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.719499 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.823270 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.823390 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.823462 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.823524 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.823546 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.930377 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.930534 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.930700 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.930892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:20 crc kubenswrapper[4931]: I0130 05:09:20.931280 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:20Z","lastTransitionTime":"2026-01-30T05:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.034964 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.035028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.035048 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.035076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.035095 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.138991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.139057 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.139078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.139111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.139132 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.242085 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.242159 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.242178 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.242204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.242223 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.345891 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.345958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.345976 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.346005 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.346026 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.421887 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.421955 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.422009 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.422020 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:21 crc kubenswrapper[4931]: E0130 05:09:21.422142 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:21 crc kubenswrapper[4931]: E0130 05:09:21.422354 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:09:21 crc kubenswrapper[4931]: E0130 05:09:21.422788 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:21 crc kubenswrapper[4931]: E0130 05:09:21.422934 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.428286 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:04:18.362374625 +0000 UTC Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.449453 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.449506 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.449527 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.449550 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.449570 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.553725 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.553809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.553829 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.553859 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.553881 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.657589 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.657681 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.657701 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.657734 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.657752 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.760750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.760808 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.760827 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.760857 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.760887 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.863807 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.863874 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.863894 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.863922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.863941 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.967666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.967741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.967750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.967770 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:21 crc kubenswrapper[4931]: I0130 05:09:21.967780 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:21Z","lastTransitionTime":"2026-01-30T05:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.071028 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.071095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.071114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.071195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.071219 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.174545 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.174623 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.174641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.174670 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.174694 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.278175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.278246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.278268 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.278296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.278317 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.382029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.382114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.382128 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.382148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.382161 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.429083 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 03:19:05.046308926 +0000 UTC Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.485602 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.485691 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.486099 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.486394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.486452 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.590134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.590203 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.590222 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.590248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.590269 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.694086 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.694155 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.694174 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.694204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.694224 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.798235 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.798302 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.798321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.798353 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.798372 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.901599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.901730 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.901748 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.901813 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:22 crc kubenswrapper[4931]: I0130 05:09:22.901831 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:22Z","lastTransitionTime":"2026-01-30T05:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.005106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.005175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.005197 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.005226 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.005246 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.151597 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.151667 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.151707 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.151774 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.151834 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.257315 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.257406 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.257434 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.257456 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.257470 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.361981 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.362058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.362078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.362109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.362174 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.422025 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.422261 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.422463 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.422575 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.422626 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.422814 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.422954 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.423093 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.429236 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:59:21.062135588 +0000 UTC Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.465119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.465195 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.465221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.465250 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.465274 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.568321 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.568492 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.568523 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.568555 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.568638 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.671584 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.671694 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.671714 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.671743 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.671765 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.775052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.775115 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.775134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.775160 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.775182 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.878080 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.878164 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.878185 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.878209 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.878227 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.879999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.880062 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.880082 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.880110 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.880128 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.901688 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:23Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.908030 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.908106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
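The patch above is structurally fine; it is rejected in admission. The node.network-node-identity.openshift.io webhook answering on https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-30, so every status update fails the TLS handshake the same way. (The kubelet-serving certificate itself is valid until 2026-02-24; the certificate_manager lines earlier only show its jittered rotation deadline being recomputed.) A hedged way to confirm the webhook certificate's dates from the node, assuming the third-party cryptography package is available:

```python
import datetime
import socket
import ssl

from cryptography import x509

# Endpoint taken from the failed webhook Post in the error above.
HOST, PORT = "127.0.0.1", 9743

# Disable verification so we can read the certificate even though it is expired.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.utcnow()
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
print("expired:  ", now > cert.not_valid_after)
```

Whether the certificate is stale or the node clock has jumped, the payload being patched is not the problem.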
event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.908119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.908144 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.908161 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.933637 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:23Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.941227 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.941326 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.941375 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.941405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.941465 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.963680 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:23Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.969841 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.969910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.969930 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.969959 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.969980 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:23 crc kubenswrapper[4931]: E0130 05:09:23.991871 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:23Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.997179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.997234 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.997255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.997278 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:23 crc kubenswrapper[4931]: I0130 05:09:23.997295 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:23Z","lastTransitionTime":"2026-01-30T05:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: E0130 05:09:24.017583 4931 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T05:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d83649b-6a34-4b83-bc96-3ff1ac14c758\\\",\\\"systemUUID\\\":\\\"babf1111-baa6-43bf-8e98-8707b9d18072\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:24Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:24 crc kubenswrapper[4931]: E0130 05:09:24.017817 4931 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.019850 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.019902 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.019922 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.019943 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.019979 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.123849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.123932 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.123953 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.123978 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.123997 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.227237 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.227296 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.227316 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.227341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.227360 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.331049 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.331271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.331301 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.331336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.331358 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.430453 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:12:04.03570343 +0000 UTC Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.435489 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.435560 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.435581 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.435610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.435628 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.538712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.538771 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.538788 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.538809 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.538827 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.641578 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.641625 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.641639 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.641657 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.641669 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.746356 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.746551 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.746583 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.746618 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.746643 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.850338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.850454 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.850482 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.850515 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.850538 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.953958 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.954065 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.954083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.954111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:24 crc kubenswrapper[4931]: I0130 05:09:24.954134 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:24Z","lastTransitionTime":"2026-01-30T05:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.056914 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.056980 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.057006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.057051 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.057085 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.161011 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.161076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.161095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.161119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.161137 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.263954 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.264016 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.264035 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.264058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.264076 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.367001 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.367088 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.367109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.367138 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.367161 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.421164 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.421224 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.421301 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:25 crc kubenswrapper[4931]: E0130 05:09:25.421333 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.421370 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:25 crc kubenswrapper[4931]: E0130 05:09:25.421594 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:25 crc kubenswrapper[4931]: E0130 05:09:25.421727 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:25 crc kubenswrapper[4931]: E0130 05:09:25.421892 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.430742 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:23:05.381151445 +0000 UTC Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.444710 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f38b05b3-9d05-4908-8700-10648cb914ae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76cc9d5056c2d95a99de13c3f3f4a492966c6933e1bcc23fcc1e705ba2b9271a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09deb2eebd0cef929f31cb9252562c74dbe7ddce16b0db7c43e094b0f23c9aae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb56a6c73b5d4a29e34aaa83e0be6892cf5f5ba3284b18153a8fcfb60343dcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.468267 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7a23337-f7f5-446b-8a26-1a92225474df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae502c18c3cc1944f67850104e8d7819cc3893d52b877b8f23c81387b745b7d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbdbc8b311c9de87b1d5cf57e19c4283e735b6d4579e9f6e468f5dd1f925578c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1affd503f7d9b067489811090a51387fd2aa499509865030b0f93cded2062f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d86863a534719cf9493b82d08cf117a4e029d3c041f1718427c88d0ea909733\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.470970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.471063 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.471089 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.471122 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.471144 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.495069 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fafbfee2ff93d8b931969fe152f7ae3d698aaaa2d48538d4915fbfa3422f3880\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.519725 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lm7vv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b17d6adf-e35b-4bf8-9ab2-e6720e595835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:08:55Z\\\",\\\"message\\\":\\\"2026-01-30T05:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16\\\\n2026-01-30T05:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_62497d82-dbe5-4b49-9856-765125ac2f16 to /host/opt/cni/bin/\\\\n2026-01-30T05:08:10Z [verbose] multus-daemon started\\\\n2026-01-30T05:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-30T05:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kkfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lm7vv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.556977 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa6b4bfc-bb66-4371-8e78-353873344f17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20a9b5e4ff228a4713d44e1f3b6856f74e315f8a47abe7613c8e6813242ead57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://256d972e17b738e8e14a40cefd5e0d12f74ae5aba56f9c357e29f193ebd5b7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c95346dd11511d4c19dd070873cc8b459a118c5f99ff87dff21965a39bfaf08e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c364285f503e7a0c201bd75f15c5913dafb298b
41647b55569ee5251b9c8b4ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0b67ef4253edb9455a783e664f610d2334412c679ac951b7f8bb24d3f58fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c45011d12ca6fe225d496dc64afc4405629ec53acc2ebcda9fe14afa462261e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719f0c75faa152591f9f72839b7e888b239077e56ef573f3bb247ce4a5a71ea7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2567d1f1e9ab3a8a35740c8725efd69a6ae2022ba1e9f8771702839d5f54c97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.575285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.575335 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.575351 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.575374 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.575394 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.583967 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f92025d1-3392-4c42-802e-b549f0bf4e7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0130 05:07:59.538717 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 05:07:59.541678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-523852778/tls.crt::/tmp/serving-cert-523852778/tls.key\\\\\\\"\\\\nI0130 05:08:05.256370 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0130 05:08:05.274783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0130 05:08:05.274829 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0130 05:08:05.274866 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0130 05:08:05.274878 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0130 05:08:05.290126 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0130 05:08:05.290156 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0130 05:08:05.290164 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0130 05:08:05.290167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0130 05:08:05.290170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0130 05:08:05.290174 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0130 05:08:05.290290 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0130 05:08:05.295265 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.608233 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.629745 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.656851 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06cb3786-294c-45f0-b414-66d84f8d5786\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06a5c2b37c07fbb37c0c500395d9b5993f80e98aadeeee8eb46fd0244833a78a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7bfd37702d6bdbd580da18ffcc234ebbd7841ada15be7651a40e707ecce25afe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79334886da3dbdc68e53047fc37af6a3b595e00416870302129378761b06d6f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8d5e58ce5b45bf04b943e52fb7ce3bb1f03be834add77024a98e04b87d7657f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://668fe0242a4500ab470502a033941344ddc824c25a664105485ff020c9b1e630\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dca75899b961dd42245ad6eb520d8c625639dda1a1c5ca494eca6346267f53af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42d7e1b38ba8c02ccddcd87a719a110d26b7906f53689726498904a122883ead\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfvtg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cdsw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.678098 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.678175 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.678193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.678214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.678254 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.689314 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"556d9fc5-72b4-4134-8074-1e9d07012763\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T05:09:00Z\\\",\\\"message\\\":\\\"ints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0130 05:09:00.314584 6959 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0130 05:09:00.314585 6959 services_controller.go:444] Built service openshift-dns-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314592 6959 services_controller.go:445] Built service openshift-dns-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0130 05:09:00.314395 6959 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 05:09:00.314653 6959 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rwbjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bshbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.706417 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cba59fc-ee1f-450a-ab9e-2743c1bbb933\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:07:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddb77e9defc8c4121eae34daeca1948ee8aef2d6c884fb05b2a5c53e85cbe9c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:07:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e913d42533802903
3aa1073be9668185b9f9c4dbb2560466d086b52aa6ce17c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e913d425338029033aa1073be9668185b9f9c4dbb2560466d086b52aa6ce17c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T05:07:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T05:07:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:07:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.726251 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.744448 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87d53ad37831a6ab12d1403e37e87caefd5dfa5703945f11ce17998748bbc932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.761984 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xjfpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c26ef8ba-80e9-4ce4-a950-9333ceda4fab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5081c972400ada81f4c0b439fae6d2bcd98a3ef693be6596edcdd202eb6766\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zs6rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xjfpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.779888 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"189be3dc-d439-47c2-b1f2-7413fc4b5e85\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7db0a3f8d149cf186174eeafb40cec15384675a4263dd97c7a73e1e3ba8caab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xl6mq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wfdxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.782198 4931 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.782251 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.782271 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.782298 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.782315 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.796207 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vtnpc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99cb8b56-06fb-4497-82f8-d2ba1887be6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd74ac41f4e65cb4380bf0f4338b63ba1aa7de94874cc685cd55b7580896d00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qqgf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:10Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vtnpc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.817596 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gt48b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1421762e-4873-46cb-8c43-b8faa0cbca62\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5dkp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gt48b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc 
kubenswrapper[4931]: I0130 05:09:25.842260 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeeda6865f3327eb7eaa6fc09558dc67cacc0750e7ef858b21a2cc654668cf91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://094e7a5134e512ac6f42e5672e7a84a101f2905333079e1fd22eb08a3c579257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.860410 4931 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f069d6db-7396-4c40-9ea9-4cc66c499cb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T05:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ecc1689992632455a84de692ae57730c3d457434258e04a43892135033a3fda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61ebe22d2604043e99ae63c45b8dd031466c26632279b08cb8cab1c220d18a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T05:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dss26\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T05:08:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rmtbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T05:09:25Z is after 2025-08-24T17:21:41Z" Jan 30 
05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.886556 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.886666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.886695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.886730 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.886755 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.990212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.990293 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.990311 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.990338 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.990362 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:25Z","lastTransitionTime":"2026-01-30T05:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:25 crc kubenswrapper[4931]: I0130 05:09:25.993107 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:25 crc kubenswrapper[4931]: E0130 05:09:25.993381 4931 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:09:25 crc kubenswrapper[4931]: E0130 05:09:25.993547 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs podName:1421762e-4873-46cb-8c43-b8faa0cbca62 nodeName:}" failed. No retries permitted until 2026-01-30 05:10:29.993483541 +0000 UTC m=+165.363393838 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs") pod "network-metrics-daemon-gt48b" (UID: "1421762e-4873-46cb-8c43-b8faa0cbca62") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.094064 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.094213 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.094236 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.094381 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.094404 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.197039 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.197106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.197125 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.197152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.197175 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.301083 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.301149 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.301171 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.301202 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.301237 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.406026 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.406111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.406131 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.406161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.406182 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.431944 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:25:00.529683599 +0000 UTC Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.510483 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.510537 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.510553 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.510580 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.510602 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.614096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.614156 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.614179 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.614208 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.614228 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.717441 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.717510 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.717528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.717554 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.717572 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.820539 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.820621 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.820642 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.820676 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.820695 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.923599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.923672 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.923691 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.923723 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:26 crc kubenswrapper[4931]: I0130 05:09:26.923744 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:26Z","lastTransitionTime":"2026-01-30T05:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.027591 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.027683 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.027703 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.027738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.027762 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.133378 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.133477 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.133497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.133574 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.133597 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.237118 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.237183 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.237201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.237225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.237243 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.340021 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.340100 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.340129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.340161 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.340182 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.421538 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.421619 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.421648 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:27 crc kubenswrapper[4931]: E0130 05:09:27.422195 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.421722 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:27 crc kubenswrapper[4931]: E0130 05:09:27.422323 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:27 crc kubenswrapper[4931]: E0130 05:09:27.422461 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:27 crc kubenswrapper[4931]: E0130 05:09:27.422647 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.433063 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:53:36.631224708 +0000 UTC Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.443910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.443957 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.443970 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.443986 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.444000 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.552478 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.552568 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.552594 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.552630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.552661 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.657255 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.657332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.657352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.657383 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.657403 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.760633 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.760768 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.760789 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.760814 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.760832 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.864485 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.864559 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.864577 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.864610 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.864632 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.967885 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.968006 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.968029 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.968058 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:27 crc kubenswrapper[4931]: I0130 05:09:27.968080 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:27Z","lastTransitionTime":"2026-01-30T05:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.071998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.072076 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.072096 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.072126 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.072150 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.175565 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.175666 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.175699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.175741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.175762 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.279180 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.279294 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.279322 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.279362 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.279388 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.383607 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.383675 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.383695 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.383721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.383742 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.434161 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:52:08.358951009 +0000 UTC
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.488114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.488245 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.488269 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.488303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.488331 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.592257 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.592323 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.592341 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.592368 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.592387 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
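Note the certificate_manager.go:356 entry above: the expiration (2026-02-24 05:53:03) never moves, but the rotation deadline jumps on every occurrence (2025-11-10 earlier, 2025-12-21 here, other dates below). That is because client-go recomputes a randomly jittered deadline each time it evaluates rotation. A minimal sketch of that behavior; the 70-90%-of-lifetime window and the one-year lifetime are assumptions about client-go's jitter, not values read from this log:

#!/usr/bin/env python3
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # fresh deadline in the 70-90% window of the certificate lifetime
    # (assumed client-go constants)
    return not_before + (not_after - not_before) * (0.7 + 0.2 * random.random())

if __name__ == "__main__":
    not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log
    not_before = not_after - timedelta(days=365)   # assumed one-year lifetime
    for _ in range(3):
        print(rotation_deadline(not_before, not_after))

Under those assumptions the deadlines scatter between early November 2025 and mid January 2026, which is consistent with the spread seen in this capture.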
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.695396 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.695511 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.695532 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.695569 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.695600 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.799009 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.799385 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.799594 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.799750 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.799879 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.904169 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.904246 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.904266 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.904299 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:28 crc kubenswrapper[4931]: I0130 05:09:28.904321 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:28Z","lastTransitionTime":"2026-01-30T05:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.007889 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.007964 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.007982 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.008009 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.008037 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.111635 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.111702 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.111720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.111747 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.111766 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.218893 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.219303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.219573 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.221109 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.221166 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.326217 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.326319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.326342 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.326373 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.326401 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.421633 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:29 crc kubenswrapper[4931]: E0130 05:09:29.421799 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.422645 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:29 crc kubenswrapper[4931]: E0130 05:09:29.422729 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.422757 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.422909 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:29 crc kubenswrapper[4931]: E0130 05:09:29.422955 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bshbf_openshift-ovn-kubernetes(556d9fc5-72b4-4134-8074-1e9d07012763)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763"
Jan 30 05:09:29 crc kubenswrapper[4931]: E0130 05:09:29.422977 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.423131 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:29 crc kubenswrapper[4931]: E0130 05:09:29.423200 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.429304 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.429336 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.429346 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.429360 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.429373 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
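The "back-off 40s" above is the kubelet's crash-loop delay for ovnkube-controller: the delay doubles on each failed restart from a small initial value up to a cap, so a 40s back-off means the container has already failed several times in a row. A minimal sketch of that schedule; the 10s base and 5m cap are assumed kubelet defaults, not values taken from this log:

#!/usr/bin/env python3
BASE = 10   # seconds; assumed kubelet initial crash-loop back-off
CAP = 300   # seconds; assumed kubelet maximum back-off (5m)

def crashloop_delay(restarts: int) -> int:
    """Delay applied before the next restart after `restarts` failures."""
    return min(BASE * 2 ** restarts, CAP)

if __name__ == "__main__":
    # 10, 20, 40, 80, 160, 300, 300 -> the 40s window is the third failure
    print([crashloop_delay(n) for n in range(7)])

Until this container stays up there is nothing running to provide the pod network, which fits the NetworkPluginNotReady errors on every other pod in this section.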
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.434566 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:39:45.422873232 +0000 UTC
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.532665 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.532699 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.532711 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.532725 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.532735 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.636014 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.636078 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.636095 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.636119 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.636137 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.739106 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.739221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.739240 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.739273 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.739294 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.842201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.842288 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.842312 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.842343 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.842369 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.945801 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.945863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.945884 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.945910 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:29 crc kubenswrapper[4931]: I0130 05:09:29.945929 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:29Z","lastTransitionTime":"2026-01-30T05:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.049650 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.049716 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.049738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.049770 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.049795 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.152925 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.152997 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.153019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.153052 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.153075 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.257940 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.258017 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.258040 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.258072 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.258099 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.361314 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.361394 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.361411 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.361470 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.361492 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.435322 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:57:32.502862918 +0000 UTC Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.464630 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.464700 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.464722 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.464752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.464773 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.567214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.567264 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.567290 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.567319 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.567355 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.670641 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.670721 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.670741 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.670768 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.670786 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.774148 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.774209 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.774226 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.774253 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.774272 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.878340 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.878405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.878461 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.878497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.878521 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.981696 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.981773 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.981798 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.981828 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:30 crc kubenswrapper[4931]: I0130 05:09:30.981848 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:30Z","lastTransitionTime":"2026-01-30T05:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.084793 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.084928 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.084955 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.084987 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.085051 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.189135 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.189182 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.189193 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.189212 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.189225 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.292517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.292687 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.292709 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.292735 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.292789 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.396735 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.396833 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.396863 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.396895 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.396918 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.421759 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.421763 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.421773 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:31 crc kubenswrapper[4931]: E0130 05:09:31.421996 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.422050 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:31 crc kubenswrapper[4931]: E0130 05:09:31.422288 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:31 crc kubenswrapper[4931]: E0130 05:09:31.422354 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:31 crc kubenswrapper[4931]: E0130 05:09:31.422592 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.435924 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:59:46.557802713 +0000 UTC Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.500645 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.500720 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.500738 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.500764 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.500783 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.605214 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.605285 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.605303 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.605332 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.605350 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.709043 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.709129 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.709152 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.709188 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.709214 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.813350 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.813409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.813460 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.813488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.813508 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.917134 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.917191 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.917201 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.917221 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 05:09:31 crc kubenswrapper[4931]: I0130 05:09:31.917237 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:31Z","lastTransitionTime":"2026-01-30T05:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.020613 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.020682 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.020713 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.020739 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.020763 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.123906 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.123959 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.123971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.123991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.124005 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.227259 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.227327 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.227352 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.227387 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.227413 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.332225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.332306 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.332331 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.332361 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.332387 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.436071 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:20:44.696760998 +0000 UTC Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.436393 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.436528 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.436549 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.436579 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.436601 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.539999 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.540068 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.540087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.540114 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.540135 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.645409 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.645509 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.645529 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.645557 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.645577 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.749154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.749225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.749248 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.749283 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.749307 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.853125 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.853204 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.853225 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.853252 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.853269 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.956957 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.957042 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.957073 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.957140 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:32 crc kubenswrapper[4931]: I0130 05:09:32.957166 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:32Z","lastTransitionTime":"2026-01-30T05:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.059896 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.059962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.059990 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.060018 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.060039 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.164017 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.164111 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.164154 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.164192 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.164219 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.267911 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.267998 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.268027 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.268059 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.268084 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.372357 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.372535 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.372565 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.372599 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.372621 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.421851 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.422114 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.422158 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.422377 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:33 crc kubenswrapper[4931]: E0130 05:09:33.422382 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:33 crc kubenswrapper[4931]: E0130 05:09:33.422580 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:33 crc kubenswrapper[4931]: E0130 05:09:33.422676 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:33 crc kubenswrapper[4931]: E0130 05:09:33.422827 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.436994 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:30:14.893172783 +0000 UTC Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.476365 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.476488 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.476519 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.476587 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.476608 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.581313 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.581369 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.581382 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.581402 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.581418 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.684627 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.684712 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.684742 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.684776 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.684800 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.788892 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.788971 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.788991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.789019 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.789044 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.891637 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.891710 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.891728 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.891752 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.891770 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.995190 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.995366 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.995413 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.995501 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:33 crc kubenswrapper[4931]: I0130 05:09:33.995528 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:33Z","lastTransitionTime":"2026-01-30T05:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.098781 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.098839 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.098849 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.098873 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.098884 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:34Z","lastTransitionTime":"2026-01-30T05:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.201991 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.202054 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.202067 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.202087 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.202104 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:34Z","lastTransitionTime":"2026-01-30T05:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.304907 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.304962 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.304972 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.304989 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.305002 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:34Z","lastTransitionTime":"2026-01-30T05:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.373405 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.373497 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.373517 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.373543 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.373562 4931 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T05:09:34Z","lastTransitionTime":"2026-01-30T05:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.431043 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9"] Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.431635 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.434267 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.434497 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.434726 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.434837 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.437192 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:53:30.030887631 +0000 UTC Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.437244 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.448931 4931 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.491930 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rmtbw" podStartSLOduration=87.491899364 podStartE2EDuration="1m27.491899364s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.456518183 +0000 UTC m=+109.826428510" watchObservedRunningTime="2026-01-30 05:09:34.491899364 +0000 UTC m=+109.861809621" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.495768 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e05528c-e400-4c01-98f1-e97adf895d92-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.495891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e05528c-e400-4c01-98f1-e97adf895d92-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.495949 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e05528c-e400-4c01-98f1-e97adf895d92-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.496176 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1e05528c-e400-4c01-98f1-e97adf895d92-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.496756 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e05528c-e400-4c01-98f1-e97adf895d92-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.510524 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.51048984 podStartE2EDuration="1m0.51048984s" podCreationTimestamp="2026-01-30 05:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.510334676 +0000 UTC m=+109.880244933" watchObservedRunningTime="2026-01-30 05:09:34.51048984 +0000 UTC m=+109.880400137" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.564788 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lm7vv" podStartSLOduration=87.564750596 podStartE2EDuration="1m27.564750596s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.551260189 +0000 UTC m=+109.921170456" watchObservedRunningTime="2026-01-30 05:09:34.564750596 +0000 UTC m=+109.934660893" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598533 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e05528c-e400-4c01-98f1-e97adf895d92-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598669 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e05528c-e400-4c01-98f1-e97adf895d92-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598707 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e05528c-e400-4c01-98f1-e97adf895d92-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598704 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e05528c-e400-4c01-98f1-e97adf895d92-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598748 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e05528c-e400-4c01-98f1-e97adf895d92-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598830 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e05528c-e400-4c01-98f1-e97adf895d92-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.598980 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e05528c-e400-4c01-98f1-e97adf895d92-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.599888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e05528c-e400-4c01-98f1-e97adf895d92-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.609885 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.609845612 podStartE2EDuration="1m22.609845612s" podCreationTimestamp="2026-01-30 05:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.583696613 +0000 UTC m=+109.953606920" watchObservedRunningTime="2026-01-30 05:09:34.609845612 +0000 UTC m=+109.979755909" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.609967 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e05528c-e400-4c01-98f1-e97adf895d92-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.610254 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.610244033 podStartE2EDuration="1m29.610244033s" podCreationTimestamp="2026-01-30 05:08:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.608590065 +0000 UTC m=+109.978500322" watchObservedRunningTime="2026-01-30 05:09:34.610244033 +0000 UTC m=+109.980154330" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.623121 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1e05528c-e400-4c01-98f1-e97adf895d92-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9dsd9\" (UID: \"1e05528c-e400-4c01-98f1-e97adf895d92\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.694916 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cdsw5" podStartSLOduration=87.694863231 podStartE2EDuration="1m27.694863231s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.694315955 +0000 UTC m=+110.064226222" watchObservedRunningTime="2026-01-30 05:09:34.694863231 +0000 UTC m=+110.064773488" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.771868 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.771834724 podStartE2EDuration="1m28.771834724s" podCreationTimestamp="2026-01-30 05:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.770569047 +0000 UTC m=+110.140479304" watchObservedRunningTime="2026-01-30 05:09:34.771834724 +0000 UTC m=+110.141744981" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.771923 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.834536 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xjfpj" podStartSLOduration=87.834516017 podStartE2EDuration="1m27.834516017s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.819177326 +0000 UTC m=+110.189087583" watchObservedRunningTime="2026-01-30 05:09:34.834516017 +0000 UTC m=+110.204426274" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.834634 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podStartSLOduration=87.834630791 podStartE2EDuration="1m27.834630791s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.833797996 +0000 UTC m=+110.203708253" watchObservedRunningTime="2026-01-30 05:09:34.834630791 +0000 UTC m=+110.204541048" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.845219 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vtnpc" podStartSLOduration=87.845213032 podStartE2EDuration="1m27.845213032s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.84514956 +0000 UTC m=+110.215059817" watchObservedRunningTime="2026-01-30 05:09:34.845213032 +0000 UTC m=+110.215123289" Jan 30 05:09:34 crc kubenswrapper[4931]: I0130 05:09:34.871934 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=27.871904196 podStartE2EDuration="27.871904196s" podCreationTimestamp="2026-01-30 05:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:34.87134603 +0000 UTC m=+110.241256297" watchObservedRunningTime="2026-01-30 05:09:34.871904196 +0000 UTC m=+110.241814463" Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.246838 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" event={"ID":"1e05528c-e400-4c01-98f1-e97adf895d92","Type":"ContainerStarted","Data":"8928f664343ffe393101efb5da1dc68f33ba1bc5030ba2639756113dd78479a2"} Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.246932 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" event={"ID":"1e05528c-e400-4c01-98f1-e97adf895d92","Type":"ContainerStarted","Data":"352cf410d6dc3550ccd5bbea2ce945e73032e1f1b716ddde2f2a24c7464b9087"} Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.274032 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9dsd9" podStartSLOduration=88.274006399 podStartE2EDuration="1m28.274006399s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:35.271895627 +0000 UTC m=+110.641805924" watchObservedRunningTime="2026-01-30 05:09:35.274006399 +0000 UTC m=+110.643916696" Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.421249 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.421360 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:35 crc kubenswrapper[4931]: E0130 05:09:35.421555 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.421576 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:35 crc kubenswrapper[4931]: I0130 05:09:35.421655 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:35 crc kubenswrapper[4931]: E0130 05:09:35.424083 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:35 crc kubenswrapper[4931]: E0130 05:09:35.424241 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:35 crc kubenswrapper[4931]: E0130 05:09:35.424476 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:37 crc kubenswrapper[4931]: I0130 05:09:37.421060 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:37 crc kubenswrapper[4931]: I0130 05:09:37.421174 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:37 crc kubenswrapper[4931]: I0130 05:09:37.421262 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:37 crc kubenswrapper[4931]: E0130 05:09:37.421255 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:37 crc kubenswrapper[4931]: I0130 05:09:37.421054 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:37 crc kubenswrapper[4931]: E0130 05:09:37.421453 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:37 crc kubenswrapper[4931]: E0130 05:09:37.421636 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:37 crc kubenswrapper[4931]: E0130 05:09:37.421754 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:39 crc kubenswrapper[4931]: I0130 05:09:39.422036 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:39 crc kubenswrapper[4931]: I0130 05:09:39.422107 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:39 crc kubenswrapper[4931]: E0130 05:09:39.422628 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:39 crc kubenswrapper[4931]: I0130 05:09:39.422196 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:39 crc kubenswrapper[4931]: I0130 05:09:39.422164 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:39 crc kubenswrapper[4931]: E0130 05:09:39.422803 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:39 crc kubenswrapper[4931]: E0130 05:09:39.423207 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:39 crc kubenswrapper[4931]: E0130 05:09:39.423369 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:41 crc kubenswrapper[4931]: I0130 05:09:41.421527 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:41 crc kubenswrapper[4931]: I0130 05:09:41.421673 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:41 crc kubenswrapper[4931]: I0130 05:09:41.421722 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:41 crc kubenswrapper[4931]: I0130 05:09:41.421737 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:41 crc kubenswrapper[4931]: E0130 05:09:41.422610 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:41 crc kubenswrapper[4931]: E0130 05:09:41.422879 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:41 crc kubenswrapper[4931]: E0130 05:09:41.423070 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:41 crc kubenswrapper[4931]: E0130 05:09:41.423185 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:42 crc kubenswrapper[4931]: I0130 05:09:42.277122 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/1.log" Jan 30 05:09:42 crc kubenswrapper[4931]: I0130 05:09:42.278654 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/0.log" Jan 30 05:09:42 crc kubenswrapper[4931]: I0130 05:09:42.278754 4931 generic.go:334] "Generic (PLEG): container finished" podID="b17d6adf-e35b-4bf8-9ab2-e6720e595835" containerID="c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0" exitCode=1 Jan 30 05:09:42 crc kubenswrapper[4931]: I0130 05:09:42.278827 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerDied","Data":"c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0"} Jan 30 05:09:42 crc kubenswrapper[4931]: I0130 05:09:42.278949 4931 scope.go:117] "RemoveContainer" containerID="71e5ae535724d0bf08b647693f5ce248b193261779f219e61f21eeb5b8263899" Jan 30 05:09:42 crc kubenswrapper[4931]: I0130 05:09:42.279606 4931 scope.go:117] "RemoveContainer" containerID="c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0" Jan 30 05:09:42 crc kubenswrapper[4931]: E0130 05:09:42.279943 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lm7vv_openshift-multus(b17d6adf-e35b-4bf8-9ab2-e6720e595835)\"" pod="openshift-multus/multus-lm7vv" podUID="b17d6adf-e35b-4bf8-9ab2-e6720e595835" Jan 30 05:09:43 crc kubenswrapper[4931]: I0130 05:09:43.285833 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/1.log" Jan 30 05:09:43 crc kubenswrapper[4931]: I0130 05:09:43.421713 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:43 crc kubenswrapper[4931]: I0130 05:09:43.421775 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:43 crc kubenswrapper[4931]: I0130 05:09:43.421713 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:43 crc kubenswrapper[4931]: I0130 05:09:43.421912 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:43 crc kubenswrapper[4931]: E0130 05:09:43.422051 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:43 crc kubenswrapper[4931]: E0130 05:09:43.422330 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:43 crc kubenswrapper[4931]: E0130 05:09:43.422403 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:43 crc kubenswrapper[4931]: E0130 05:09:43.422538 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:44 crc kubenswrapper[4931]: I0130 05:09:44.422275 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.297952 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/3.log" Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.302109 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerStarted","Data":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.302663 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.347627 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podStartSLOduration=98.347599558 podStartE2EDuration="1m38.347599558s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:09:45.346863676 +0000 UTC m=+120.716774013" watchObservedRunningTime="2026-01-30 05:09:45.347599558 +0000 UTC m=+120.717509855" Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.371457 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gt48b"] Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.371612 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 05:09:45 crc kubenswrapper[4931]: E0130 05:09:45.371713 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.421564 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.421565 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:45 crc kubenswrapper[4931]: E0130 05:09:45.421798 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:09:45 crc kubenswrapper[4931]: E0130 05:09:45.422072 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:09:45 crc kubenswrapper[4931]: I0130 05:09:45.422660 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:45 crc kubenswrapper[4931]: E0130 05:09:45.424544 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:09:45 crc kubenswrapper[4931]: E0130 05:09:45.451602 4931 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 30 05:09:45 crc kubenswrapper[4931]: E0130 05:09:45.561038 4931 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 05:09:47 crc kubenswrapper[4931]: I0130 05:09:47.422965 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:47 crc kubenswrapper[4931]: I0130 05:09:47.423490 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:47 crc kubenswrapper[4931]: E0130 05:09:47.423683 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:47 crc kubenswrapper[4931]: I0130 05:09:47.423381 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:47 crc kubenswrapper[4931]: I0130 05:09:47.424078 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:47 crc kubenswrapper[4931]: E0130 05:09:47.424326 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:09:47 crc kubenswrapper[4931]: E0130 05:09:47.424832 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:09:47 crc kubenswrapper[4931]: E0130 05:09:47.424909 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:09:49 crc kubenswrapper[4931]: I0130 05:09:49.421411 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:49 crc kubenswrapper[4931]: I0130 05:09:49.421554 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:49 crc kubenswrapper[4931]: I0130 05:09:49.421595 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:49 crc kubenswrapper[4931]: E0130 05:09:49.421716 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 05:09:49 crc kubenswrapper[4931]: E0130 05:09:49.421886 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 05:09:49 crc kubenswrapper[4931]: E0130 05:09:49.421982 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 05:09:49 crc kubenswrapper[4931]: I0130 05:09:49.422698 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:49 crc kubenswrapper[4931]: E0130 05:09:49.422950 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62"
Jan 30 05:09:50 crc kubenswrapper[4931]: E0130 05:09:50.562361 4931 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 30 05:09:51 crc kubenswrapper[4931]: I0130 05:09:51.420998 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:09:51 crc kubenswrapper[4931]: I0130 05:09:51.421069 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 05:09:51 crc kubenswrapper[4931]: I0130 05:09:51.421117 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:09:51 crc kubenswrapper[4931]: I0130 05:09:51.421019 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 05:09:51 crc kubenswrapper[4931]: E0130 05:09:51.421201 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:51 crc kubenswrapper[4931]: E0130 05:09:51.421467 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:51 crc kubenswrapper[4931]: E0130 05:09:51.421491 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:51 crc kubenswrapper[4931]: E0130 05:09:51.421554 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:53 crc kubenswrapper[4931]: I0130 05:09:53.422008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:53 crc kubenswrapper[4931]: I0130 05:09:53.422114 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:53 crc kubenswrapper[4931]: I0130 05:09:53.422199 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:53 crc kubenswrapper[4931]: I0130 05:09:53.422229 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:53 crc kubenswrapper[4931]: E0130 05:09:53.422221 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:53 crc kubenswrapper[4931]: E0130 05:09:53.422401 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:53 crc kubenswrapper[4931]: E0130 05:09:53.422649 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:53 crc kubenswrapper[4931]: E0130 05:09:53.422775 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:55 crc kubenswrapper[4931]: I0130 05:09:55.421624 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:55 crc kubenswrapper[4931]: I0130 05:09:55.421686 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:55 crc kubenswrapper[4931]: I0130 05:09:55.421648 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:55 crc kubenswrapper[4931]: I0130 05:09:55.424932 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:55 crc kubenswrapper[4931]: E0130 05:09:55.425019 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:55 crc kubenswrapper[4931]: E0130 05:09:55.425366 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:55 crc kubenswrapper[4931]: E0130 05:09:55.425628 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:55 crc kubenswrapper[4931]: I0130 05:09:55.425733 4931 scope.go:117] "RemoveContainer" containerID="c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0" Jan 30 05:09:55 crc kubenswrapper[4931]: E0130 05:09:55.425916 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:55 crc kubenswrapper[4931]: E0130 05:09:55.563647 4931 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 05:09:56 crc kubenswrapper[4931]: I0130 05:09:56.349177 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/1.log" Jan 30 05:09:56 crc kubenswrapper[4931]: I0130 05:09:56.349244 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerStarted","Data":"9cbe0bfee502f12e8f2f3a6f1a461efb27353f5529809ccc54fecbb26b304ada"} Jan 30 05:09:57 crc kubenswrapper[4931]: I0130 05:09:57.421895 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:57 crc kubenswrapper[4931]: I0130 05:09:57.421951 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:57 crc kubenswrapper[4931]: I0130 05:09:57.422010 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:57 crc kubenswrapper[4931]: E0130 05:09:57.422208 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:57 crc kubenswrapper[4931]: I0130 05:09:57.422269 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:57 crc kubenswrapper[4931]: E0130 05:09:57.422473 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:57 crc kubenswrapper[4931]: E0130 05:09:57.422536 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:57 crc kubenswrapper[4931]: E0130 05:09:57.422620 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:09:59 crc kubenswrapper[4931]: I0130 05:09:59.421158 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:09:59 crc kubenswrapper[4931]: I0130 05:09:59.421211 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:09:59 crc kubenswrapper[4931]: I0130 05:09:59.421261 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:09:59 crc kubenswrapper[4931]: I0130 05:09:59.421298 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:09:59 crc kubenswrapper[4931]: E0130 05:09:59.422964 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 05:09:59 crc kubenswrapper[4931]: E0130 05:09:59.423172 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 05:09:59 crc kubenswrapper[4931]: E0130 05:09:59.423284 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gt48b" podUID="1421762e-4873-46cb-8c43-b8faa0cbca62" Jan 30 05:09:59 crc kubenswrapper[4931]: E0130 05:09:59.423497 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.421046 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.421052 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.421211 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.421135 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.424398 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.424906 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.425284 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.425689 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.425833 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 05:10:01 crc kubenswrapper[4931]: I0130 05:10:01.425838 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.881739 4931 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.954763 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"] Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.955537 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.957615 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ww4ml"] Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.958149 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.959822 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ndkb"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.960853 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.962743 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fsn4r"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.963182 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.963829 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.964505 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.967494 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.967859 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.969899 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ff4lr"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.970864 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.971028 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.971408 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.971640 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.974833 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.975298 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.976038 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.979808 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"]
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.980003 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.989929 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.990403 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.994539 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.990857 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.993882 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.991292 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.991452 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.992847 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.993947 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.994592 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 05:10:04 crc kubenswrapper[4931]: I0130 05:10:04.994833 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.012789 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.012867 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.014072 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.015336 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.015994 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.016148 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.016272 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.021676 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.021907 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.022088 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.022297 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.022457 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tbgzs"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.022637 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.022816 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.023050 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.022786 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tbgzs"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.023641 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7jf2b"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.024347 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.024506 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.024592 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.024807 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7jf2b"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.027405 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.027903 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.028093 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.028185 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.028608 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.028805 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.028900 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029010 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.028846 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029160 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029183 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029206 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029303 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029308 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.029717 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.032481 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.033297 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.036840 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.037794 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.038172 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f8zg7"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.038720 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.039197 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.039400 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.039649 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.039968 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040479 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177d163e-7881-411f-a61b-a00e9c8bc9dc-config\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040534 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040611 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442g6\" (UniqueName: \"kubernetes.io/projected/cd36df00-a4ac-44ab-bdee-fcf018713f78-kube-api-access-442g6\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040636 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e06ad469-0fb9-47d7-90fc-3c74ef8bb833-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7242g\" (UID: \"e06ad469-0fb9-47d7-90fc-3c74ef8bb833\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040673 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vwh\" (UniqueName: \"kubernetes.io/projected/62b9975b-f28e-46de-89a0-bac3d2e7f927-kube-api-access-77vwh\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040698 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbab60d9-c5df-4396-8012-94dc987f82c2-node-pullsecrets\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040717 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a1f22c-baac-4356-9d01-ec2b51700b3a-serving-cert\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040743 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040764 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-serving-cert\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040784 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-service-ca-bundle\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040808 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040831 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-audit\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040851 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040870 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/177d163e-7881-411f-a61b-a00e9c8bc9dc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd326f4-63cb-4c1d-bb6c-98118a45f714-serving-cert\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040913 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040932 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-oauth-config\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040963 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-trusted-ca-bundle\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.040985 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-config\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041008 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-serving-cert\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041054 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041076 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041099 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041121 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qxm\" (UniqueName: \"kubernetes.io/projected/4fd326f4-63cb-4c1d-bb6c-98118a45f714-kube-api-access-22qxm\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041144 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041170 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-client-ca\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
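
Each reconciler_common entry above records operationExecutor verifying one volume before the newly added pods can be mounted and started. The UniqueName in these lines encodes plugin, pod UID, and volume name as kubernetes.io/<plugin>/<podUID>-<volumeName>. Below is a small parser for the names exactly as they are printed here; it is illustrative only, since kubelet has its own UniqueVolumeName handling.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseUniqueName splits a volume UniqueName as printed in the
    // reconciler_common lines, e.g.
    //   kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies
    // into plugin, pod UID (the first five dash-separated groups), and
    // volume name (the remainder, which may itself contain dashes).
    func parseUniqueName(u string) (plugin, podUID, volume string, err error) {
    	parts := strings.SplitN(u, "/", 3)
    	if len(parts) != 3 {
    		return "", "", "", fmt.Errorf("unexpected UniqueName %q", u)
    	}
    	plugin = parts[0] + "/" + parts[1]
    	groups := strings.SplitN(parts[2], "-", 6)
    	if len(groups) != 6 {
    		return "", "", "", fmt.Errorf("unexpected UID in %q", u)
    	}
    	podUID = strings.Join(groups[:5], "-")
    	volume = groups[5]
    	return plugin, podUID, volume, nil
    }

    func main() {
    	// Prints: kubernetes.io/configmap 45ceead9-96b4-4b3c-9fba-1288da84db97 audit-policies <nil>
    	fmt.Println(parseUniqueName("kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies"))
    }
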
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041192 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62b9975b-f28e-46de-89a0-bac3d2e7f927-machine-approver-tls\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041235 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-config\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041257 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041276 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-dir\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041299 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62b9975b-f28e-46de-89a0-bac3d2e7f927-auth-proxy-config\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041320 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-config\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041354 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-client-ca\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041379 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd36df00-a4ac-44ab-bdee-fcf018713f78-serving-cert\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041403 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041447 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-config\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041473 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2kc7\" (UniqueName: \"kubernetes.io/projected/61a1f22c-baac-4356-9d01-ec2b51700b3a-kube-api-access-b2kc7\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041494 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b9975b-f28e-46de-89a0-bac3d2e7f927-config\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041528 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrkft\" (UniqueName: \"kubernetes.io/projected/dbab60d9-c5df-4396-8012-94dc987f82c2-kube-api-access-lrkft\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041614 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxjn\" (UniqueName: \"kubernetes.io/projected/cf0e8eba-09e8-4d9c-87de-9c57583e7276-kube-api-access-cmxjn\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041641 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-oauth-serving-cert\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041687 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/177d163e-7881-411f-a61b-a00e9c8bc9dc-images\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041713 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-config\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041770 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-encryption-config\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041797 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-etcd-client\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041823 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041874 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2wf\" (UniqueName: \"kubernetes.io/projected/177d163e-7881-411f-a61b-a00e9c8bc9dc-kube-api-access-5q2wf\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041895 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-service-ca\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041941 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwlrb\" (UniqueName: \"kubernetes.io/projected/e06ad469-0fb9-47d7-90fc-3c74ef8bb833-kube-api-access-mwlrb\") pod \"cluster-samples-operator-665b6dd947-7242g\" (UID: \"e06ad469-0fb9-47d7-90fc-3c74ef8bb833\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.041965 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbab60d9-c5df-4396-8012-94dc987f82c2-audit-dir\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.042012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-image-import-ca\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.042039 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.042062 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvmhv\" (UniqueName: \"kubernetes.io/projected/45ceead9-96b4-4b3c-9fba-1288da84db97-kube-api-access-gvmhv\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.042628 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.042753 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wwbmr"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.045662 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r62wb"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.046299 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r62wb"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.047386 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.047495 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.048123 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.055401 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.060550 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-268mt"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.061043 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"]
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.061965 4931 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.062249 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.067613 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.087513 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.089329 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.090115 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.091119 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.091479 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.092080 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.092389 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ww4ml"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.092444 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fsn4r"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.092460 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ndkb"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.097615 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.097938 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.098145 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.098409 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.098607 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.098782 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.099378 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.101943 4931 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.102385 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.102700 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.103907 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.104074 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.104483 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.104956 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c8568"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.105674 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.105860 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.140839 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.141997 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145492 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145532 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbab60d9-c5df-4396-8012-94dc987f82c2-audit-dir\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145553 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2wf\" (UniqueName: \"kubernetes.io/projected/177d163e-7881-411f-a61b-a00e9c8bc9dc-kube-api-access-5q2wf\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145573 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-service-ca\") pod \"console-f9d7485db-ff4lr\" (UID: 
\"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145595 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwlrb\" (UniqueName: \"kubernetes.io/projected/e06ad469-0fb9-47d7-90fc-3c74ef8bb833-kube-api-access-mwlrb\") pod \"cluster-samples-operator-665b6dd947-7242g\" (UID: \"e06ad469-0fb9-47d7-90fc-3c74ef8bb833\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145617 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-image-import-ca\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145637 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvmhv\" (UniqueName: \"kubernetes.io/projected/45ceead9-96b4-4b3c-9fba-1288da84db97-kube-api-access-gvmhv\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145678 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177d163e-7881-411f-a61b-a00e9c8bc9dc-config\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145699 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145721 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145745 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-442g6\" (UniqueName: \"kubernetes.io/projected/cd36df00-a4ac-44ab-bdee-fcf018713f78-kube-api-access-442g6\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145762 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e06ad469-0fb9-47d7-90fc-3c74ef8bb833-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7242g\" (UID: \"e06ad469-0fb9-47d7-90fc-3c74ef8bb833\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145785 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d9ce65b-1339-4198-ae4d-5697206eba5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145805 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vwh\" (UniqueName: \"kubernetes.io/projected/62b9975b-f28e-46de-89a0-bac3d2e7f927-kube-api-access-77vwh\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145825 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbab60d9-c5df-4396-8012-94dc987f82c2-node-pullsecrets\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145842 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a1f22c-baac-4356-9d01-ec2b51700b3a-serving-cert\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145860 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.145888 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-serving-cert\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.146006 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-service-ca-bundle\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 
crc kubenswrapper[4931]: I0130 05:10:05.146895 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-service-ca-bundle\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147050 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147101 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/177d163e-7881-411f-a61b-a00e9c8bc9dc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147129 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147227 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d9ce65b-1339-4198-ae4d-5697206eba5f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147255 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-audit\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147306 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd326f4-63cb-4c1d-bb6c-98118a45f714-serving-cert\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147333 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147352 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d9ce65b-1339-4198-ae4d-5697206eba5f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147374 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-oauth-config\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147391 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-service-ca\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147446 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-config\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147485 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-serving-cert\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147505 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-trusted-ca-bundle\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-config\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147545 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpsv\" (UniqueName: \"kubernetes.io/projected/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-kube-api-access-9kpsv\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147556 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147565 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147590 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147612 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147631 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d44nl\" (UniqueName: \"kubernetes.io/projected/9d9ce65b-1339-4198-ae4d-5697206eba5f-kube-api-access-d44nl\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147643 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147656 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147684 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qxm\" (UniqueName: \"kubernetes.io/projected/4fd326f4-63cb-4c1d-bb6c-98118a45f714-kube-api-access-22qxm\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147707 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147728 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-trusted-ca\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147748 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147769 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-client-ca\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147789 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62b9975b-f28e-46de-89a0-bac3d2e7f927-machine-approver-tls\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147812 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-config\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" 
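[Editor's note] The three message families repeating throughout this section are the kubelet volume manager's per-volume pipeline: reconciler_common.go:245 "operationExecutor.VerifyControllerAttachedVolume started", then reconciler_common.go:218 "operationExecutor.MountVolume started", then operation_generator.go:637 "MountVolume.SetUp succeeded". A minimal, hypothetical Go sketch for tracing how far each pod/volume pair got through that pipeline in a capture like this one is below. It is not an existing tool; the file name volphase.go and the usage line are illustrative, and the regex assumes the escaped-quote klog formatting visible in the entries above (volume \"name\" inside the quoted message, an unescaped pod="ns/name" key at the end).

// volphase.go — sketch: scan kubelet journal text on stdin and report the
// furthest volume-setup phase observed for each pod/volume pair.
// Assumed usage: journalctl -u kubelet --no-pager | go run volphase.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// Matches the three phase messages as they appear in this log. The \\" in the
// pattern matches the literal backslash-quote klog prints inside the message;
// the trailing pod="..." is the structured key appended after the message.
var entry = regexp.MustCompile(
	`(VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.SetUp succeeded)` +
		` for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)

// rank orders the phases so only the furthest one seen is kept.
var rank = map[string]int{
	"VerifyControllerAttachedVolume started": 1,
	"MountVolume started":                    2,
	"MountVolume.SetUp succeeded":            3,
}

func main() {
	furthest := map[string]int{} // "ns/pod volume=name" -> highest phase rank seen
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		m := entry.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // not one of the three phase messages
		}
		key := m[3] + " volume=" + m[2]
		if r := rank[m[1]]; r > furthest[key] {
			furthest[key] = r
		}
	}
	names := []string{"", "verify-attach only", "mount started", "setup succeeded"}
	keys := make([]string, 0, len(furthest))
	for k := range furthest {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%-18s %s\n", names[furthest[k]], k)
	}
}

Run over just this section, the sketch would show pairs such as openshift-authentication-operator/authentication-operator-69f744f599-6wmnm volume=service-ca-bundle already at "setup succeeded", while volumes that so far only logged reconciler_common.go:245 (for example the etcd-operator-b45778765-wwbmr volumes) would still report "verify-attach only" — a quick way to spot mounts that never completed. [End editor's note]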
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147832 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147854 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-dir\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147873 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62b9975b-f28e-46de-89a0-bac3d2e7f927-auth-proxy-config\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147893 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-config\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147922 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-client-ca\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147964 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd36df00-a4ac-44ab-bdee-fcf018713f78-serving-cert\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.147983 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-config\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148001 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148020 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-ca\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148043 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-client\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148069 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2kc7\" (UniqueName: \"kubernetes.io/projected/61a1f22c-baac-4356-9d01-ec2b51700b3a-kube-api-access-b2kc7\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148088 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b9975b-f28e-46de-89a0-bac3d2e7f927-config\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148106 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-config\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148110 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148138 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrkft\" (UniqueName: \"kubernetes.io/projected/dbab60d9-c5df-4396-8012-94dc987f82c2-kube-api-access-lrkft\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148160 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxjn\" (UniqueName: \"kubernetes.io/projected/cf0e8eba-09e8-4d9c-87de-9c57583e7276-kube-api-access-cmxjn\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148177 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvpkw\" (UniqueName: \"kubernetes.io/projected/3ac5359b-e653-4824-ad6f-4672970dc0cc-kube-api-access-dvpkw\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148195 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-serving-cert\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148214 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/177d163e-7881-411f-a61b-a00e9c8bc9dc-images\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148233 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-oauth-serving-cert\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148259 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-config\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148288 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-encryption-config\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5359b-e653-4824-ad6f-4672970dc0cc-serving-cert\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148318 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.148338 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-etcd-client\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.150447 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-client-ca\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.150950 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-config\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.153096 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.153164 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbab60d9-c5df-4396-8012-94dc987f82c2-audit-dir\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.155257 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-service-ca\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.155653 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.156013 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-serving-cert\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.157130 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/177d163e-7881-411f-a61b-a00e9c8bc9dc-config\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.157575 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.157823 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.158975 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.161026 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.161364 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.162587 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b9975b-f28e-46de-89a0-bac3d2e7f927-config\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.163863 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.164335 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.164503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/177d163e-7881-411f-a61b-a00e9c8bc9dc-images\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.164883 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.165282 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.165630 4931 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.165897 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.165966 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phq4q"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.166226 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-oauth-serving-cert\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.166385 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-dir\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.166408 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.166776 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.167637 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lmnvn"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.168389 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-config\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.168697 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62b9975b-f28e-46de-89a0-bac3d2e7f927-auth-proxy-config\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.168707 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.169111 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.169750 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-image-import-ca\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.169868 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-config\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.170423 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-etcd-serving-ca\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.170658 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.170743 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.173422 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.174157 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.171349 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-config\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.175450 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.175879 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.171037 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.171171 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-client-ca\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.171142 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.176294 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.177167 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dbab60d9-c5df-4396-8012-94dc987f82c2-node-pullsecrets\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.177309 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.181542 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.182313 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.182593 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.182696 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.182809 4931 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.182856 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-encryption-config\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.182971 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-config\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183076 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-serving-cert\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183232 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183290 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-audit\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183278 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183323 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183238 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183383 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183494 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.183776 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184295 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184301 4931 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184578 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/177d163e-7881-411f-a61b-a00e9c8bc9dc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184693 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184817 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184905 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.184996 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.185055 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.185074 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e06ad469-0fb9-47d7-90fc-3c74ef8bb833-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7242g\" (UID: \"e06ad469-0fb9-47d7-90fc-3c74ef8bb833\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.185164 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.185185 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.185958 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.186241 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.186322 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.186452 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbab60d9-c5df-4396-8012-94dc987f82c2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.186868 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.187839 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd36df00-a4ac-44ab-bdee-fcf018713f78-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.187884 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.188772 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.202913 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.202834 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dbab60d9-c5df-4396-8012-94dc987f82c2-etcd-client\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.204099 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.204271 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-trusted-ca-bundle\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.204757 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/62b9975b-f28e-46de-89a0-bac3d2e7f927-machine-approver-tls\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.205120 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.205536 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd36df00-a4ac-44ab-bdee-fcf018713f78-serving-cert\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.205648 4931 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.207034 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.208950 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-oauth-config\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.211594 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.211634 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.212108 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.212875 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.216280 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a1f22c-baac-4356-9d01-ec2b51700b3a-serving-cert\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.214628 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.215195 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.216295 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ff4lr"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.214239 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd326f4-63cb-4c1d-bb6c-98118a45f714-serving-cert\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.217343 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.218064 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.219074 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.221268 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.226741 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.226774 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.229257 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.229384 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.230030 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-scnkp"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.230463 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.230718 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.230946 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.231709 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.232814 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.233342 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.234252 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4jb99"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.234977 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.235934 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.236683 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tbgzs"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.238614 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.239347 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.240575 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c8568"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.241796 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.242680 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.243926 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.245257 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.246715 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f8zg7"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.247992 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bj2bf"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249260 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f10add70-0777-45cf-9555-7bda3b6ebeec-trusted-ca\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249313 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249358 4931 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d9ce65b-1339-4198-ae4d-5697206eba5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249381 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e8738e-651f-4f09-a052-1ff22028e3f3-service-ca-bundle\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249403 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lp4\" (UniqueName: \"kubernetes.io/projected/6e653e21-3e72-4867-b39e-f374d752d503-kube-api-access-f5lp4\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249443 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8pv9\" (UniqueName: \"kubernetes.io/projected/f10add70-0777-45cf-9555-7bda3b6ebeec-kube-api-access-l8pv9\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249552 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249622 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d9ce65b-1339-4198-ae4d-5697206eba5f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249638 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e653e21-3e72-4867-b39e-f374d752d503-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249719 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d9ce65b-1339-4198-ae4d-5697206eba5f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249775 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-service-ca\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249826 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-config\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249884 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kpsv\" (UniqueName: \"kubernetes.io/projected/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-kube-api-access-9kpsv\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.249987 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79v2m\" (UniqueName: \"kubernetes.io/projected/c142d29b-ca43-49b7-8055-3175cdf9c45e-kube-api-access-79v2m\") pod \"dns-operator-744455d44c-r62wb\" (UID: \"c142d29b-ca43-49b7-8055-3175cdf9c45e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250044 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f10add70-0777-45cf-9555-7bda3b6ebeec-metrics-tls\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250073 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d44nl\" (UniqueName: \"kubernetes.io/projected/9d9ce65b-1339-4198-ae4d-5697206eba5f-kube-api-access-d44nl\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 
05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250179 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-trusted-ca\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250224 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250275 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-stats-auth\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250319 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6ps5\" (UniqueName: \"kubernetes.io/projected/21e8738e-651f-4f09-a052-1ff22028e3f3-kube-api-access-k6ps5\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250354 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-ca\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250375 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-client\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250397 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-config\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-default-certificate\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250496 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-serving-cert\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250515 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvpkw\" (UniqueName: \"kubernetes.io/projected/3ac5359b-e653-4824-ad6f-4672970dc0cc-kube-api-access-dvpkw\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250535 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e653e21-3e72-4867-b39e-f374d752d503-proxy-tls\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250565 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-metrics-certs\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250601 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c142d29b-ca43-49b7-8055-3175cdf9c45e-metrics-tls\") pod \"dns-operator-744455d44c-r62wb\" (UID: \"c142d29b-ca43-49b7-8055-3175cdf9c45e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250627 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5359b-e653-4824-ad6f-4672970dc0cc-serving-cert\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.250671 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f10add70-0777-45cf-9555-7bda3b6ebeec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.251568 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d9ce65b-1339-4198-ae4d-5697206eba5f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.252111 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-trusted-ca\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " 
pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.252340 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-config\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.252365 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-52zxd"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.252986 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d9ce65b-1339-4198-ae4d-5697206eba5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.253501 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.254380 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.255571 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.255892 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-serving-cert\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.256803 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.257052 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.258451 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.259701 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.261064 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wwbmr"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.262768 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.264035 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.265623 4931 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lmnvn"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.267402 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7jf2b"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.268897 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.269923 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.271284 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4phnt"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.273532 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.273664 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.274178 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4jb99"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.275133 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.276062 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.277335 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.278250 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phq4q"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.279311 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4phnt"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.280454 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.281602 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.282881 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.284019 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r62wb"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.285802 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52zxd"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.286901 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bj2bf"] Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.298223 4931 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.317562 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.324858 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-client\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.338053 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.352220 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f10add70-0777-45cf-9555-7bda3b6ebeec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.353017 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f10add70-0777-45cf-9555-7bda3b6ebeec-trusted-ca\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.353217 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.353539 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e8738e-651f-4f09-a052-1ff22028e3f3-service-ca-bundle\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.353838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5lp4\" (UniqueName: \"kubernetes.io/projected/6e653e21-3e72-4867-b39e-f374d752d503-kube-api-access-f5lp4\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.353961 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8pv9\" (UniqueName: \"kubernetes.io/projected/f10add70-0777-45cf-9555-7bda3b6ebeec-kube-api-access-l8pv9\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.354345 4931 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e653e21-3e72-4867-b39e-f374d752d503-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355266 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79v2m\" (UniqueName: \"kubernetes.io/projected/c142d29b-ca43-49b7-8055-3175cdf9c45e-kube-api-access-79v2m\") pod \"dns-operator-744455d44c-r62wb\" (UID: \"c142d29b-ca43-49b7-8055-3175cdf9c45e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355309 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f10add70-0777-45cf-9555-7bda3b6ebeec-metrics-tls\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355375 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355414 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-stats-auth\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355493 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6ps5\" (UniqueName: \"kubernetes.io/projected/21e8738e-651f-4f09-a052-1ff22028e3f3-kube-api-access-k6ps5\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-default-certificate\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e653e21-3e72-4867-b39e-f374d752d503-proxy-tls\") pod 
\"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355885 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-metrics-certs\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.355934 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c142d29b-ca43-49b7-8055-3175cdf9c45e-metrics-tls\") pod \"dns-operator-744455d44c-r62wb\" (UID: \"c142d29b-ca43-49b7-8055-3175cdf9c45e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.357317 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e653e21-3e72-4867-b39e-f374d752d503-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.358189 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.364750 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ac5359b-e653-4824-ad6f-4672970dc0cc-serving-cert\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.378063 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.398151 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.418440 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.429407 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c142d29b-ca43-49b7-8055-3175cdf9c45e-metrics-tls\") pod \"dns-operator-744455d44c-r62wb\" (UID: \"c142d29b-ca43-49b7-8055-3175cdf9c45e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.437680 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.457910 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.477881 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.497559 4931 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.501949 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-config\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.517312 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.537405 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.557996 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.561824 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-ca\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.577683 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.588230 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.597346 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.604757 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.618050 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.621752 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ac5359b-e653-4824-ad6f-4672970dc0cc-etcd-service-ca\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.637467 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.658801 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.678113 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.693390 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f10add70-0777-45cf-9555-7bda3b6ebeec-metrics-tls\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.699010 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.727862 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.735980 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f10add70-0777-45cf-9555-7bda3b6ebeec-trusted-ca\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.738105 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.758934 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.771855 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-metrics-certs\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.778355 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.798052 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.810909 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-default-certificate\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.818778 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.830294 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/21e8738e-651f-4f09-a052-1ff22028e3f3-stats-auth\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.838065 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.844917 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e8738e-651f-4f09-a052-1ff22028e3f3-service-ca-bundle\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.859308 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.917733 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.937459 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.951672 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e653e21-3e72-4867-b39e-f374d752d503-proxy-tls\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.958706 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.979066 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 05:10:05 crc kubenswrapper[4931]: I0130 05:10:05.998411 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.047135 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2wf\" (UniqueName: \"kubernetes.io/projected/177d163e-7881-411f-a61b-a00e9c8bc9dc-kube-api-access-5q2wf\") pod \"machine-api-operator-5694c8668f-k9mcd\" (UID: \"177d163e-7881-411f-a61b-a00e9c8bc9dc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.067998 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwlrb\" (UniqueName: \"kubernetes.io/projected/e06ad469-0fb9-47d7-90fc-3c74ef8bb833-kube-api-access-mwlrb\") pod \"cluster-samples-operator-665b6dd947-7242g\" (UID: \"e06ad469-0fb9-47d7-90fc-3c74ef8bb833\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.088902 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-442g6\" (UniqueName: \"kubernetes.io/projected/cd36df00-a4ac-44ab-bdee-fcf018713f78-kube-api-access-442g6\") pod \"authentication-operator-69f744f599-6wmnm\" (UID: \"cd36df00-a4ac-44ab-bdee-fcf018713f78\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.107001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvmhv\" (UniqueName: \"kubernetes.io/projected/45ceead9-96b4-4b3c-9fba-1288da84db97-kube-api-access-gvmhv\") pod \"oauth-openshift-558db77b4-ww4ml\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
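The reflector.go:368 entries above record client-go reflectors finishing their initial LIST for each ConfigMap and Secret a pod references: the kubelet keeps one watch-backed cache per object so that the MountVolume.SetUp operations interleaved with them can read configuration from local cache rather than hitting the API server each time. A minimal sketch of the same mechanism using the public informer API follows; the kubeconfig path and the openshift-etcd-operator namespace are illustrative assumptions (the namespace merely echoes the log), not kubelet internals.

// Minimal sketch (not kubelet source): start a client-go informer whose
// reflector lists-and-watches ConfigMaps in one namespace, the same
// mechanism behind the "Caches populated for *v1.ConfigMap" lines above.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default home path; the kubelet
	// itself authenticates with its own node credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithNamespace("openshift-etcd-operator"), // namespace taken from the log, purely as an example
	)
	inf := factory.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			cm := obj.(*corev1.ConfigMap)
			fmt.Println("cache populated for ConfigMap", cm.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// WaitForCacheSync blocks until the reflector's initial LIST has
	// filled the store, the point at which kubelet logs "Caches populated".
	cache.WaitForCacheSync(stop, inf.HasSynced)
}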
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.128307 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2kc7\" (UniqueName: \"kubernetes.io/projected/61a1f22c-baac-4356-9d01-ec2b51700b3a-kube-api-access-b2kc7\") pod \"controller-manager-879f6c89f-fsn4r\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.147264 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrkft\" (UniqueName: \"kubernetes.io/projected/dbab60d9-c5df-4396-8012-94dc987f82c2-kube-api-access-lrkft\") pod \"apiserver-76f77b778f-8ndkb\" (UID: \"dbab60d9-c5df-4396-8012-94dc987f82c2\") " pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.159042 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.167913 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxjn\" (UniqueName: \"kubernetes.io/projected/cf0e8eba-09e8-4d9c-87de-9c57583e7276-kube-api-access-cmxjn\") pod \"console-f9d7485db-ff4lr\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") " pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.169071 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.177057 4931 request.go:700] Waited for 1.010108445s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.179731 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.200006 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.200469 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.215104 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.217754 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.240321 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
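The request.go:700 entry above is client-go's client-side rate limiter at work, which the message itself distinguishes from server-side API Priority and Fairness: the GET was parked for about a second because the kubelet had drained its token bucket while priming dozens of caches at once. The knob lives on rest.Config; here is a hedged sketch with illustrative values (not the kubelet's actual QPS/Burst settings, and the host is simply the endpoint seen in the log).

// Sketch of the knob behind "Waited for ... due to client-side throttling":
// client-go's token-bucket rate limiter, configured on rest.Config.
// The QPS/Burst values below are illustrative assumptions.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

func newThrottledClient(cfg *rest.Config) *kubernetes.Clientset {
	cfg.QPS = 5    // steady-state requests per second
	cfg.Burst = 10 // short bursts above QPS before callers start waiting
	// Equivalent explicit form; requests beyond the bucket block, and
	// client-go logs the wait once it exceeds a threshold, as seen above.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
	return kubernetes.NewForConfigOrDie(cfg)
}

func main() {
	// Endpoint copied from the log; credentials omitted in this sketch.
	cfg := &rest.Config{Host: "https://api-int.crc.testing:6443"}
	_ = newThrottledClient(cfg)
}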
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.241252 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.242801 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.262236 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.271448 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.278846 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.327000 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.328925 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vwh\" (UniqueName: \"kubernetes.io/projected/62b9975b-f28e-46de-89a0-bac3d2e7f927-kube-api-access-77vwh\") pod \"machine-approver-56656f9798-sh5fl\" (UID: \"62b9975b-f28e-46de-89a0-bac3d2e7f927\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.358560 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.359847 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qxm\" (UniqueName: \"kubernetes.io/projected/4fd326f4-63cb-4c1d-bb6c-98118a45f714-kube-api-access-22qxm\") pod \"route-controller-manager-6576b87f9c-5zjn4\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.382050 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.399986 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.418688 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.438112 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.448402 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.448716 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.459171 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.477133 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.492873 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.493729 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g"]
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.501943 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.517879 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.525588 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8ndkb"]
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.538689 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.558967 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.579101 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.599120 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.622584 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.639295 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.658742 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.666386 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ff4lr"]
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.678561 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.698901 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.718139 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.719023 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"]
Jan 30 05:10:06 crc kubenswrapper[4931]: W0130 05:10:06.732876 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd326f4_63cb_4c1d_bb6c_98118a45f714.slice/crio-833fcf0086ce5d914597f0c997c10afab54c09e9f589df3d6d360cb20264d686 WatchSource:0}: Error finding container 833fcf0086ce5d914597f0c997c10afab54c09e9f589df3d6d360cb20264d686: Status 404 returned error can't find the container with id 833fcf0086ce5d914597f0c997c10afab54c09e9f589df3d6d360cb20264d686
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.737462 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.758308 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.774092 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ww4ml"]
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.777603 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-k9mcd"]
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.778610 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fsn4r"]
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.780939 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.791254 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6wmnm"]
Jan 30 05:10:06 crc kubenswrapper[4931]: W0130 05:10:06.795161 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61a1f22c_baac_4356_9d01_ec2b51700b3a.slice/crio-e5797b6657e3c9082bc25bca94daca7b60cb46a9c442bf1c2289963ba55e2ade WatchSource:0}: Error finding container e5797b6657e3c9082bc25bca94daca7b60cb46a9c442bf1c2289963ba55e2ade: Status 404 returned error can't find the container with id e5797b6657e3c9082bc25bca94daca7b60cb46a9c442bf1c2289963ba55e2ade
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.799497 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.818299 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.837842 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 05:10:06 crc kubenswrapper[4931]: W0130 05:10:06.840040 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ceead9_96b4_4b3c_9fba_1288da84db97.slice/crio-58f7af397c08f51e1fad13d7c31e06e26340ff1e4667e88288913594a1b1daca WatchSource:0}: Error finding container 58f7af397c08f51e1fad13d7c31e06e26340ff1e4667e88288913594a1b1daca: Status 404 returned error can't find the container with id 58f7af397c08f51e1fad13d7c31e06e26340ff1e4667e88288913594a1b1daca
Jan 30 05:10:06 crc kubenswrapper[4931]: W0130 05:10:06.840809 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177d163e_7881_411f_a61b_a00e9c8bc9dc.slice/crio-0c76b00db36e1460a93a4d001a56b2b33fcfd76b0b28d25ce3ce7f0597833b2b WatchSource:0}: Error finding container 0c76b00db36e1460a93a4d001a56b2b33fcfd76b0b28d25ce3ce7f0597833b2b: Status 404 returned error can't find the container with id 0c76b00db36e1460a93a4d001a56b2b33fcfd76b0b28d25ce3ce7f0597833b2b
Jan 30 05:10:06 crc kubenswrapper[4931]: W0130 05:10:06.841490 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd36df00_a4ac_44ab_bdee_fcf018713f78.slice/crio-37fe82f073303ad2ec26a00e779c977370715ada853bd13298f25780d264336c WatchSource:0}: Error finding container 37fe82f073303ad2ec26a00e779c977370715ada853bd13298f25780d264336c: Status 404 returned error can't find the container with id 37fe82f073303ad2ec26a00e779c977370715ada853bd13298f25780d264336c
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.859632 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.878544 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.897594 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.919221 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.939148 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.958014 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.977275 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 05:10:06 crc kubenswrapper[4931]: I0130 05:10:06.997267 4931 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.040239 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.041281 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d9ce65b-1339-4198-ae4d-5697206eba5f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.058786 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.110209 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kpsv\" (UniqueName: \"kubernetes.io/projected/606fa13b-30a4-412e-86eb-9fcb5bc8ebb6-kube-api-access-9kpsv\") pod \"console-operator-58897d9998-7jf2b\" (UID: \"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6\") " pod="openshift-console-operator/console-operator-58897d9998-7jf2b"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.115221 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d44nl\" (UniqueName: \"kubernetes.io/projected/9d9ce65b-1339-4198-ae4d-5697206eba5f-kube-api-access-d44nl\") pod \"cluster-image-registry-operator-dc59b4c8b-d7ptq\" (UID: \"9d9ce65b-1339-4198-ae4d-5697206eba5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.137994 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.142122 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvpkw\" (UniqueName: \"kubernetes.io/projected/3ac5359b-e653-4824-ad6f-4672970dc0cc-kube-api-access-dvpkw\") pod \"etcd-operator-b45778765-wwbmr\" (UID: \"3ac5359b-e653-4824-ad6f-4672970dc0cc\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.156188 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7jf2b"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.157605 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.171569 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.179074 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.196777 4931 request.go:700] Waited for 1.94295257s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.200174 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.216138 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.218990 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.237210 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.261092 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.297196 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f10add70-0777-45cf-9555-7bda3b6ebeec-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.313093 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5lp4\" (UniqueName: \"kubernetes.io/projected/6e653e21-3e72-4867-b39e-f374d752d503-kube-api-access-f5lp4\") pod \"machine-config-controller-84d6567774-c8568\" (UID: \"6e653e21-3e72-4867-b39e-f374d752d503\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.341776 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8pv9\" (UniqueName: \"kubernetes.io/projected/f10add70-0777-45cf-9555-7bda3b6ebeec-kube-api-access-l8pv9\") pod \"ingress-operator-5b745b69d9-2sc4j\" (UID: \"f10add70-0777-45cf-9555-7bda3b6ebeec\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.357975 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56094dd-41e6-41ed-9660-73cc0a3eb1ba-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2d7cc\" (UID: \"f56094dd-41e6-41ed-9660-73cc0a3eb1ba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.386933 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79v2m\" (UniqueName: \"kubernetes.io/projected/c142d29b-ca43-49b7-8055-3175cdf9c45e-kube-api-access-79v2m\") pod \"dns-operator-744455d44c-r62wb\" (UID: \"c142d29b-ca43-49b7-8055-3175cdf9c45e\") " pod="openshift-dns-operator/dns-operator-744455d44c-r62wb"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.399293 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6ps5\" (UniqueName: \"kubernetes.io/projected/21e8738e-651f-4f09-a052-1ff22028e3f3-kube-api-access-k6ps5\") pod \"router-default-5444994796-268mt\" (UID: \"21e8738e-651f-4f09-a052-1ff22028e3f3\") " pod="openshift-ingress/router-default-5444994796-268mt"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.415603 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" event={"ID":"62b9975b-f28e-46de-89a0-bac3d2e7f927","Type":"ContainerStarted","Data":"586161ab3ede775ac3f91e597f7b0b1a477402d8abf5c30a4e8f49f56e2dd9d8"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.415669 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" event={"ID":"62b9975b-f28e-46de-89a0-bac3d2e7f927","Type":"ContainerStarted","Data":"50ef286c889de6bdaef344297b333f2d83d4717f3d694f829044cd75e6359a43"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.432889 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" event={"ID":"45ceead9-96b4-4b3c-9fba-1288da84db97","Type":"ContainerStarted","Data":"b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.432950 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" event={"ID":"45ceead9-96b4-4b3c-9fba-1288da84db97","Type":"ContainerStarted","Data":"58f7af397c08f51e1fad13d7c31e06e26340ff1e4667e88288913594a1b1daca"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.433263 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.437609 4931 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ww4ml container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" start-of-body=
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.437676 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" podUID="45ceead9-96b4-4b3c-9fba-1288da84db97" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.449932 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq"]
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.455594 4931 generic.go:334] "Generic (PLEG): container finished" podID="dbab60d9-c5df-4396-8012-94dc987f82c2" containerID="3f631eef3438b6a10b7ec9737933edad6e5d0320c35f10d8a0eecdffe6bb346f" exitCode=0
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.455814 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" event={"ID":"dbab60d9-c5df-4396-8012-94dc987f82c2","Type":"ContainerDied","Data":"3f631eef3438b6a10b7ec9737933edad6e5d0320c35f10d8a0eecdffe6bb346f"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.455867 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" event={"ID":"dbab60d9-c5df-4396-8012-94dc987f82c2","Type":"ContainerStarted","Data":"7031bf54e3631eefcdafc9a21435a3301aa9fa45b4b5aa4cce5c515cdfbdcd56"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.460636 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" event={"ID":"e06ad469-0fb9-47d7-90fc-3c74ef8bb833","Type":"ContainerStarted","Data":"a2e9b7dac206f851fa736dfb4c87eb536c9b58782137e02ba4984a609d0e49fb"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.460696 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" event={"ID":"e06ad469-0fb9-47d7-90fc-3c74ef8bb833","Type":"ContainerStarted","Data":"55fe6f4dad892ec73a230daaa6e14e6e05918aea3140a9d1b7451b371834b786"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.467488 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ff4lr" event={"ID":"cf0e8eba-09e8-4d9c-87de-9c57583e7276","Type":"ContainerStarted","Data":"0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.467522 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ff4lr" event={"ID":"cf0e8eba-09e8-4d9c-87de-9c57583e7276","Type":"ContainerStarted","Data":"6ef4e3652e767b58bcd714efc40fa7c13d1316dc132366e3239b8378ad811289"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.472232 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" event={"ID":"61a1f22c-baac-4356-9d01-ec2b51700b3a","Type":"ContainerStarted","Data":"5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.472273 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" event={"ID":"61a1f22c-baac-4356-9d01-ec2b51700b3a","Type":"ContainerStarted","Data":"e5797b6657e3c9082bc25bca94daca7b60cb46a9c442bf1c2289963ba55e2ade"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.472450 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.474467 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" event={"ID":"177d163e-7881-411f-a61b-a00e9c8bc9dc","Type":"ContainerStarted","Data":"1d8da87a29ce5940ad3ab1452a8ec7b489200512c995b66af845eadbd6ebfb32"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.474534 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" event={"ID":"177d163e-7881-411f-a61b-a00e9c8bc9dc","Type":"ContainerStarted","Data":"0c76b00db36e1460a93a4d001a56b2b33fcfd76b0b28d25ce3ce7f0597833b2b"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.476192 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" event={"ID":"cd36df00-a4ac-44ab-bdee-fcf018713f78","Type":"ContainerStarted","Data":"77ea8e35397d40af396e6f38d330aa0147be23d8777295af50f404fccf6ee812"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.476214 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" event={"ID":"cd36df00-a4ac-44ab-bdee-fcf018713f78","Type":"ContainerStarted","Data":"37fe82f073303ad2ec26a00e779c977370715ada853bd13298f25780d264336c"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.477973 4931 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fsn4r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.478009 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" podUID="61a1f22c-baac-4356-9d01-ec2b51700b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.478475 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" event={"ID":"4fd326f4-63cb-4c1d-bb6c-98118a45f714","Type":"ContainerStarted","Data":"8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.478530 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" event={"ID":"4fd326f4-63cb-4c1d-bb6c-98118a45f714","Type":"ContainerStarted","Data":"833fcf0086ce5d914597f0c997c10afab54c09e9f589df3d6d360cb20264d686"}
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.480858 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.482111 4931 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5zjn4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.482153 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" podUID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.488344 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7jf2b"]
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507730 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32e4a367-9945-4fdb-b5bc-4c8d35512264-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507803 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-certificates\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507821 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjxk\" (UniqueName: \"kubernetes.io/projected/8158446c-5883-48ad-86da-77db470d8214-kube-api-access-rcjxk\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507850 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-audit-policies\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507868 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507885 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-trusted-ca\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507904 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlqvm\" (UniqueName: \"kubernetes.io/projected/fc8b1aac-27e5-4f8c-a329-821c231fb7c6-kube-api-access-qlqvm\") pod \"downloads-7954f5f757-tbgzs\" (UID: \"fc8b1aac-27e5-4f8c-a329-821c231fb7c6\") " pod="openshift-console/downloads-7954f5f757-tbgzs"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507929 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nltg\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-kube-api-access-6nltg\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507949 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-config\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507965 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34eba67-19a6-4d4e-a902-9482b2847199-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.507982 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25a99ace-2c29-419e-b5de-3f11b024ee43-audit-dir\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508004 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-bound-sa-token\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508038 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/907e32f6-7e41-43fb-862c-c6a5f835ff73-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508055 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-encryption-config\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508073 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvk4j\" (UniqueName: \"kubernetes.io/projected/907e32f6-7e41-43fb-862c-c6a5f835ff73-kube-api-access-hvk4j\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508189 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-serving-cert\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508223 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8158446c-5883-48ad-86da-77db470d8214-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508241 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkvt\" (UniqueName: \"kubernetes.io/projected/25a99ace-2c29-419e-b5de-3f11b024ee43-kube-api-access-9dkvt\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f183f0-7d5c-45c9-88a3-df19bd214439-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508275 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-etcd-client\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508290 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a34eba67-19a6-4d4e-a902-9482b2847199-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508304 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-tls\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.508324 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.509641 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sx7q\" (UniqueName: \"kubernetes.io/projected/a34eba67-19a6-4d4e-a902-9482b2847199-kube-api-access-8sx7q\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.509675 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.509692 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.509711 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/907e32f6-7e41-43fb-862c-c6a5f835ff73-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.512145 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f183f0-7d5c-45c9-88a3-df19bd214439-config\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.512235 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63f183f0-7d5c-45c9-88a3-df19bd214439-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.512336 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.512162 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-r62wb"
Jan 30 05:10:07 crc kubenswrapper[4931]: E0130 05:10:07.512965 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.012942107 +0000 UTC m=+143.382852364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.513079 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8158446c-5883-48ad-86da-77db470d8214-serving-cert\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.513269 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32e4a367-9945-4fdb-b5bc-4c8d35512264-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.522528 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wwbmr"]
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.523917 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.531963 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.538657 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-268mt"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.552626 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568"
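The nestedpendingoperations.go:348 error above shows the volume manager's backoff: the CSI mount cannot proceed because the kubevirt.io.hostpath-provisioner node plugin has not yet registered with the kubelet, so the operation is parked for 500ms, with the delay growing on repeated failures until the driver appears. A simplified sketch of that retry pattern using the apimachinery wait helpers follows; driverRegistered is a hypothetical stand-in for the kubelet's registered-plugin lookup, and this is not the volume manager's actual code.

// Simplified sketch of the retry pattern behind "No retries permitted
// until ... (durationBeforeRetry 500ms)": exponential backoff on an
// operation that fails while a CSI driver is still unregistered.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// driverRegistered is a placeholder for the kubelet's check against its
// list of registered CSI plugins; hypothetical, always false here.
func driverRegistered(name string) bool { return false }

func mountWhenDriverReady(driver string) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial delay, as in the log line above
		Factor:   2.0,                    // delay doubles after each failed attempt
		Steps:    6,                      // assumption: give up after six tries
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if !driverRegistered(driver) {
			return false, nil // not ready; sleep for the current backoff and retry
		}
		return true, nil // driver found; MountDevice could proceed here
	})
}

func main() {
	if err := mountWhenDriverReady("kubevirt.io.hostpath-provisioner"); err != nil {
		// Retries exhausted: the volume stays pending and the error is
		// surfaced, much as the kubelet logs it above.
		fmt.Println("giving up:", err)
	}
}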
Jan 30 05:10:07 crc kubenswrapper[4931]: W0130 05:10:07.609161 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e8738e_651f_4f09_a052_1ff22028e3f3.slice/crio-5451d8c83316bfe9169f10fc93c4221f7918a21b39649b65fa46ec5547e8d5ea WatchSource:0}: Error finding container 5451d8c83316bfe9169f10fc93c4221f7918a21b39649b65fa46ec5547e8d5ea: Status 404 returned error can't find the container with id 5451d8c83316bfe9169f10fc93c4221f7918a21b39649b65fa46ec5547e8d5ea
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.616080 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 05:10:07 crc kubenswrapper[4931]: E0130 05:10:07.616792 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.116732704 +0000 UTC m=+143.486642961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617154 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617237 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617289 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-config\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617336 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/907e32f6-7e41-43fb-862c-c6a5f835ff73-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617360 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-proxy-tls\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617444 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-certs\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617475 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f183f0-7d5c-45c9-88a3-df19bd214439-config\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617514 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63f183f0-7d5c-45c9-88a3-df19bd214439-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617538 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gbb\" (UniqueName: \"kubernetes.io/projected/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-kube-api-access-f7gbb\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617592 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617616 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2b7b\" (UniqueName: \"kubernetes.io/projected/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-kube-api-access-l2b7b\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617731 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8158446c-5883-48ad-86da-77db470d8214-serving-cert\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0071e157-d4b7-40ac-8a50-35f9b7aa961d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617950 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbwmm\" (UniqueName: \"kubernetes.io/projected/8ec2553d-d0b3-4b15-a42c-73c1c25ea70f-kube-api-access-cbwmm\") pod \"migrator-59844c95c7-gb6js\" (UID: \"8ec2553d-d0b3-4b15-a42c-73c1c25ea70f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.617990 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-serving-cert\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618014 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ksg\" (UniqueName: \"kubernetes.io/projected/aa1d7187-eeb1-4145-8ba0-dd1e43023003-kube-api-access-x5ksg\") pod \"package-server-manager-789f6589d5-mg25x\" (UID: \"aa1d7187-eeb1-4145-8ba0-dd1e43023003\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618050 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq7k\" (UniqueName: \"kubernetes.io/projected/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-kube-api-access-xmq7k\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618091 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32e4a367-9945-4fdb-b5bc-4c8d35512264-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618149 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-mountpoint-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618173 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618200 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618235 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48aa89b7-0ab1-432b-a693-2d56358c1d83-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618362 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-855th\" (UniqueName: \"kubernetes.io/projected/1a8f99a6-f163-4720-8eb4-bc8607753d79-kube-api-access-855th\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618396 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-signing-cabundle\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618678 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32e4a367-9945-4fdb-b5bc-4c8d35512264-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.618714 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clphc\" (UniqueName: \"kubernetes.io/projected/6db474c1-8489-43b1-bb9e-0961f9dc1dc4-kube-api-access-clphc\") pod \"multus-admission-controller-857f4d67dd-lmnvn\" (UID: \"6db474c1-8489-43b1-bb9e-0961f9dc1dc4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.619751 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/907e32f6-7e41-43fb-862c-c6a5f835ff73-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"
Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.619939 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32e4a367-9945-4fdb-b5bc-4c8d35512264-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.621729 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f183f0-7d5c-45c9-88a3-df19bd214439-config\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" Jan 30 05:10:07 crc kubenswrapper[4931]: E0130 05:10:07.622646 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.122618739 +0000 UTC m=+143.492528996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.623572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-signing-key\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.623694 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjxk\" (UniqueName: \"kubernetes.io/projected/8158446c-5883-48ad-86da-77db470d8214-kube-api-access-rcjxk\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.624488 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-certificates\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.624725 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.624790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6db474c1-8489-43b1-bb9e-0961f9dc1dc4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lmnvn\" (UID: \"6db474c1-8489-43b1-bb9e-0961f9dc1dc4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.625145 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-audit-policies\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.626965 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7krc\" (UniqueName: \"kubernetes.io/projected/e521b474-9f29-4841-a365-ed1589358607-kube-api-access-m7krc\") pod \"control-plane-machine-set-operator-78cbb6b69f-w2zzb\" (UID: \"e521b474-9f29-4841-a365-ed1589358607\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.628640 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qt2\" (UniqueName: \"kubernetes.io/projected/0071e157-d4b7-40ac-8a50-35f9b7aa961d-kube-api-access-56qt2\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.629110 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-trusted-ca\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.629146 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlqvm\" (UniqueName: \"kubernetes.io/projected/fc8b1aac-27e5-4f8c-a329-821c231fb7c6-kube-api-access-qlqvm\") pod \"downloads-7954f5f757-tbgzs\" (UID: \"fc8b1aac-27e5-4f8c-a329-821c231fb7c6\") " pod="openshift-console/downloads-7954f5f757-tbgzs" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.629481 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.630771 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-certificates\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.631530 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-audit-policies\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.631775 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2g64\" (UniqueName: 
\"kubernetes.io/projected/107d8fb1-31b1-4bec-8d55-a27e312609b1-kube-api-access-r2g64\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.631813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-config\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.631877 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-node-bootstrap-token\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.632021 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nltg\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-kube-api-access-6nltg\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.632062 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34eba67-19a6-4d4e-a902-9482b2847199-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.632127 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25a99ace-2c29-419e-b5de-3f11b024ee43-audit-dir\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.632598 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-trusted-ca\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.633206 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-config\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.634082 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32e4a367-9945-4fdb-b5bc-4c8d35512264-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: 
\"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.635112 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8158446c-5883-48ad-86da-77db470d8214-serving-cert\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636166 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-bound-sa-token\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636331 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a8f99a6-f163-4720-8eb4-bc8607753d79-secret-volume\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636364 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-plugins-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636386 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-tmpfs\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636577 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/907e32f6-7e41-43fb-862c-c6a5f835ff73-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636608 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msk64\" (UniqueName: \"kubernetes.io/projected/eb3c187d-243f-457f-b419-e02a6898fd48-kube-api-access-msk64\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636633 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-socket-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636677 
4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-encryption-config\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636705 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvk4j\" (UniqueName: \"kubernetes.io/projected/907e32f6-7e41-43fb-862c-c6a5f835ff73-kube-api-access-hvk4j\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636725 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-images\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636747 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa1d7187-eeb1-4145-8ba0-dd1e43023003-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mg25x\" (UID: \"aa1d7187-eeb1-4145-8ba0-dd1e43023003\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.636921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr6c9\" (UniqueName: \"kubernetes.io/projected/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-kube-api-access-xr6c9\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.637063 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-apiservice-cert\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.637383 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34eba67-19a6-4d4e-a902-9482b2847199-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.637479 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8ts\" (UniqueName: \"kubernetes.io/projected/48aa89b7-0ab1-432b-a693-2d56358c1d83-kube-api-access-zk8ts\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.637679 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25a99ace-2c29-419e-b5de-3f11b024ee43-audit-dir\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.638347 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.644546 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28cq\" (UniqueName: \"kubernetes.io/projected/bc314d0c-da50-4607-93e1-5bece9c3b2b1-kube-api-access-r28cq\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.645577 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-encryption-config\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.645768 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/907e32f6-7e41-43fb-862c-c6a5f835ff73-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.645819 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cl26\" (UniqueName: \"kubernetes.io/projected/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-kube-api-access-6cl26\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.645993 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.646027 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-registration-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.646612 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/1a8f99a6-f163-4720-8eb4-bc8607753d79-config-volume\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.646655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-serving-cert\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.646678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldhj\" (UniqueName: \"kubernetes.io/projected/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-kube-api-access-tldhj\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.647132 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b53f53c-0afe-4574-aebc-64c6d81f10d9-cert\") pod \"ingress-canary-52zxd\" (UID: \"6b53f53c-0afe-4574-aebc-64c6d81f10d9\") " pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.647261 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8158446c-5883-48ad-86da-77db470d8214-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.647300 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-webhook-cert\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.651356 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkvt\" (UniqueName: \"kubernetes.io/projected/25a99ace-2c29-419e-b5de-3f11b024ee43-kube-api-access-9dkvt\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.651457 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f183f0-7d5c-45c9-88a3-df19bd214439-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.651481 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-etcd-client\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: 
\"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.651517 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a34eba67-19a6-4d4e-a902-9482b2847199-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653229 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-tls\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653462 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb3c187d-243f-457f-b419-e02a6898fd48-srv-cert\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653518 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-config-volume\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653550 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj727\" (UniqueName: \"kubernetes.io/projected/6b53f53c-0afe-4574-aebc-64c6d81f10d9-kube-api-access-pj727\") pod \"ingress-canary-52zxd\" (UID: \"6b53f53c-0afe-4574-aebc-64c6d81f10d9\") " pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653569 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-metrics-tls\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653620 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0071e157-d4b7-40ac-8a50-35f9b7aa961d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653684 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48aa89b7-0ab1-432b-a693-2d56358c1d83-srv-cert\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 
05:10:07.653714 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653776 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb3c187d-243f-457f-b419-e02a6898fd48-profile-collector-cert\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653805 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e521b474-9f29-4841-a365-ed1589358607-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w2zzb\" (UID: \"e521b474-9f29-4841-a365-ed1589358607\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653832 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-csi-data-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.653878 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sx7q\" (UniqueName: \"kubernetes.io/projected/a34eba67-19a6-4d4e-a902-9482b2847199-kube-api-access-8sx7q\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.654375 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a99ace-2c29-419e-b5de-3f11b024ee43-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.657161 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8158446c-5883-48ad-86da-77db470d8214-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.660684 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-serving-cert\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.661550 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a34eba67-19a6-4d4e-a902-9482b2847199-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.662284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25a99ace-2c29-419e-b5de-3f11b024ee43-etcd-client\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.662490 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f183f0-7d5c-45c9-88a3-df19bd214439-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.664126 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6fa9de7-6e76-472c-99c8-51a6c52eb6ae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l98zf\" (UID: \"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.668101 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-tls\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.683894 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63f183f0-7d5c-45c9-88a3-df19bd214439-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zfq7h\" (UID: \"63f183f0-7d5c-45c9-88a3-df19bd214439\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.706583 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjxk\" (UniqueName: \"kubernetes.io/projected/8158446c-5883-48ad-86da-77db470d8214-kube-api-access-rcjxk\") pod \"openshift-config-operator-7777fb866f-wwdht\" (UID: \"8158446c-5883-48ad-86da-77db470d8214\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.747626 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlqvm\" (UniqueName: \"kubernetes.io/projected/fc8b1aac-27e5-4f8c-a329-821c231fb7c6-kube-api-access-qlqvm\") pod \"downloads-7954f5f757-tbgzs\" (UID: \"fc8b1aac-27e5-4f8c-a329-821c231fb7c6\") " pod="openshift-console/downloads-7954f5f757-tbgzs" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.755760 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756038 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clphc\" (UniqueName: \"kubernetes.io/projected/6db474c1-8489-43b1-bb9e-0961f9dc1dc4-kube-api-access-clphc\") pod \"multus-admission-controller-857f4d67dd-lmnvn\" (UID: \"6db474c1-8489-43b1-bb9e-0961f9dc1dc4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756063 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-855th\" (UniqueName: \"kubernetes.io/projected/1a8f99a6-f163-4720-8eb4-bc8607753d79-kube-api-access-855th\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756082 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-signing-cabundle\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756150 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-signing-key\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756172 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6db474c1-8489-43b1-bb9e-0961f9dc1dc4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lmnvn\" (UID: \"6db474c1-8489-43b1-bb9e-0961f9dc1dc4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756198 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7krc\" (UniqueName: \"kubernetes.io/projected/e521b474-9f29-4841-a365-ed1589358607-kube-api-access-m7krc\") pod \"control-plane-machine-set-operator-78cbb6b69f-w2zzb\" (UID: \"e521b474-9f29-4841-a365-ed1589358607\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756217 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56qt2\" (UniqueName: \"kubernetes.io/projected/0071e157-d4b7-40ac-8a50-35f9b7aa961d-kube-api-access-56qt2\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756240 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/107d8fb1-31b1-4bec-8d55-a27e312609b1-kube-api-access-r2g64\") pod \"csi-hostpathplugin-bj2bf\" (UID: 
\"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-node-bootstrap-token\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756290 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a8f99a6-f163-4720-8eb4-bc8607753d79-secret-volume\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756307 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-plugins-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756323 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-tmpfs\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756351 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msk64\" (UniqueName: \"kubernetes.io/projected/eb3c187d-243f-457f-b419-e02a6898fd48-kube-api-access-msk64\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756381 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-socket-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756684 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-images\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756713 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa1d7187-eeb1-4145-8ba0-dd1e43023003-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mg25x\" (UID: \"aa1d7187-eeb1-4145-8ba0-dd1e43023003\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756752 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xr6c9\" (UniqueName: \"kubernetes.io/projected/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-kube-api-access-xr6c9\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756788 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-apiservice-cert\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756816 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk8ts\" (UniqueName: \"kubernetes.io/projected/48aa89b7-0ab1-432b-a693-2d56358c1d83-kube-api-access-zk8ts\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756839 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r28cq\" (UniqueName: \"kubernetes.io/projected/bc314d0c-da50-4607-93e1-5bece9c3b2b1-kube-api-access-r28cq\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756871 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cl26\" (UniqueName: \"kubernetes.io/projected/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-kube-api-access-6cl26\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756913 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756937 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-registration-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.756992 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tldhj\" (UniqueName: \"kubernetes.io/projected/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-kube-api-access-tldhj\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a8f99a6-f163-4720-8eb4-bc8607753d79-config-volume\") pod 
\"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757027 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b53f53c-0afe-4574-aebc-64c6d81f10d9-cert\") pod \"ingress-canary-52zxd\" (UID: \"6b53f53c-0afe-4574-aebc-64c6d81f10d9\") " pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757047 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-webhook-cert\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eb3c187d-243f-457f-b419-e02a6898fd48-srv-cert\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757101 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-config-volume\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757118 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj727\" (UniqueName: \"kubernetes.io/projected/6b53f53c-0afe-4574-aebc-64c6d81f10d9-kube-api-access-pj727\") pod \"ingress-canary-52zxd\" (UID: \"6b53f53c-0afe-4574-aebc-64c6d81f10d9\") " pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757135 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-metrics-tls\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757154 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0071e157-d4b7-40ac-8a50-35f9b7aa961d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48aa89b7-0ab1-432b-a693-2d56358c1d83-srv-cert\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757194 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/eb3c187d-243f-457f-b419-e02a6898fd48-profile-collector-cert\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757212 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e521b474-9f29-4841-a365-ed1589358607-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w2zzb\" (UID: \"e521b474-9f29-4841-a365-ed1589358607\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757231 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-csi-data-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-config\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757289 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-certs\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757305 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-proxy-tls\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757330 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gbb\" (UniqueName: \"kubernetes.io/projected/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-kube-api-access-f7gbb\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757371 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2b7b\" (UniqueName: \"kubernetes.io/projected/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-kube-api-access-l2b7b\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757398 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0071e157-d4b7-40ac-8a50-35f9b7aa961d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") 
" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757441 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbwmm\" (UniqueName: \"kubernetes.io/projected/8ec2553d-d0b3-4b15-a42c-73c1c25ea70f-kube-api-access-cbwmm\") pod \"migrator-59844c95c7-gb6js\" (UID: \"8ec2553d-d0b3-4b15-a42c-73c1c25ea70f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757457 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-serving-cert\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757473 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ksg\" (UniqueName: \"kubernetes.io/projected/aa1d7187-eeb1-4145-8ba0-dd1e43023003-kube-api-access-x5ksg\") pod \"package-server-manager-789f6589d5-mg25x\" (UID: \"aa1d7187-eeb1-4145-8ba0-dd1e43023003\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757493 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmq7k\" (UniqueName: \"kubernetes.io/projected/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-kube-api-access-xmq7k\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757512 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-mountpoint-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757535 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757551 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.757568 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48aa89b7-0ab1-432b-a693-2d56358c1d83-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 
05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.758965 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-tmpfs\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.759521 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-config\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:07 crc kubenswrapper[4931]: E0130 05:10:07.759951 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.259911687 +0000 UTC m=+143.629822114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.760857 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-signing-cabundle\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.761275 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-plugins-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.763182 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.763256 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-registration-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.763865 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-apiservice-cert\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: 
\"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.763896 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6db474c1-8489-43b1-bb9e-0961f9dc1dc4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lmnvn\" (UID: \"6db474c1-8489-43b1-bb9e-0961f9dc1dc4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.764046 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-socket-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.764401 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-images\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.764876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-config-volume\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.765284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0071e157-d4b7-40ac-8a50-35f9b7aa961d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.765297 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.766172 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-mountpoint-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.766971 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a8f99a6-f163-4720-8eb4-bc8607753d79-config-volume\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.767352 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa1d7187-eeb1-4145-8ba0-dd1e43023003-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mg25x\" (UID: \"aa1d7187-eeb1-4145-8ba0-dd1e43023003\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.768137 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.768311 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/107d8fb1-31b1-4bec-8d55-a27e312609b1-csi-data-dir\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.771742 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-signing-key\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.772166 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a8f99a6-f163-4720-8eb4-bc8607753d79-secret-volume\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.772727 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-serving-cert\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.773769 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/eb3c187d-243f-457f-b419-e02a6898fd48-srv-cert\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.774834 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48aa89b7-0ab1-432b-a693-2d56358c1d83-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.775113 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-node-bootstrap-token\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.774359 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b53f53c-0afe-4574-aebc-64c6d81f10d9-cert\") pod \"ingress-canary-52zxd\" (UID: \"6b53f53c-0afe-4574-aebc-64c6d81f10d9\") " pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.776483 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nltg\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-kube-api-access-6nltg\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.778946 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvk4j\" (UniqueName: \"kubernetes.io/projected/907e32f6-7e41-43fb-862c-c6a5f835ff73-kube-api-access-hvk4j\") pod \"openshift-apiserver-operator-796bbdcf4f-dzzxq\" (UID: \"907e32f6-7e41-43fb-862c-c6a5f835ff73\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.779603 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eb3c187d-243f-457f-b419-e02a6898fd48-profile-collector-cert\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.779912 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.782776 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0071e157-d4b7-40ac-8a50-35f9b7aa961d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.784616 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-certs\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.799005 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.800206 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-bound-sa-token\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.800856 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-metrics-tls\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.802200 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48aa89b7-0ab1-432b-a693-2d56358c1d83-srv-cert\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.810283 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e521b474-9f29-4841-a365-ed1589358607-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-w2zzb\" (UID: \"e521b474-9f29-4841-a365-ed1589358607\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.810294 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-webhook-cert\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.810292 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phq4q\" (UID: 
\"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.814293 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-proxy-tls\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.822749 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-r62wb"] Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.823210 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sx7q\" (UniqueName: \"kubernetes.io/projected/a34eba67-19a6-4d4e-a902-9482b2847199-kube-api-access-8sx7q\") pod \"openshift-controller-manager-operator-756b6f6bc6-wqjcv\" (UID: \"a34eba67-19a6-4d4e-a902-9482b2847199\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.839612 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkvt\" (UniqueName: \"kubernetes.io/projected/25a99ace-2c29-419e-b5de-3f11b024ee43-kube-api-access-9dkvt\") pod \"apiserver-7bbb656c7d-pspt5\" (UID: \"25a99ace-2c29-419e-b5de-3f11b024ee43\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.858723 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:07 crc kubenswrapper[4931]: E0130 05:10:07.859501 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.359482194 +0000 UTC m=+143.729392451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:07 crc kubenswrapper[4931]: W0130 05:10:07.859589 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc142d29b_ca43_49b7_8055_3175cdf9c45e.slice/crio-1799b4060a05f1ed7b73abfd6c9a938d2a59baf01655db5a69f70b4b95f0ace7 WatchSource:0}: Error finding container 1799b4060a05f1ed7b73abfd6c9a938d2a59baf01655db5a69f70b4b95f0ace7: Status 404 returned error can't find the container with id 1799b4060a05f1ed7b73abfd6c9a938d2a59baf01655db5a69f70b4b95f0ace7 Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.868088 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc"] Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.881130 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldhj\" (UniqueName: \"kubernetes.io/projected/0fbcd41f-dfc6-4581-9f07-74e0efea3f1a-kube-api-access-tldhj\") pod \"machine-config-operator-74547568cd-5c95n\" (UID: \"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: W0130 05:10:07.882017 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf56094dd_41e6_41ed_9660_73cc0a3eb1ba.slice/crio-7d2922e3069bcc03e72eeb30ba7364f02c8649416551786a4d4e8647ed60e55d WatchSource:0}: Error finding container 7d2922e3069bcc03e72eeb30ba7364f02c8649416551786a4d4e8647ed60e55d: Status 404 returned error can't find the container with id 7d2922e3069bcc03e72eeb30ba7364f02c8649416551786a4d4e8647ed60e55d Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.885639 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.897646 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7krc\" (UniqueName: \"kubernetes.io/projected/e521b474-9f29-4841-a365-ed1589358607-kube-api-access-m7krc\") pod \"control-plane-machine-set-operator-78cbb6b69f-w2zzb\" (UID: \"e521b474-9f29-4841-a365-ed1589358607\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.915288 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56qt2\" (UniqueName: \"kubernetes.io/projected/0071e157-d4b7-40ac-8a50-35f9b7aa961d-kube-api-access-56qt2\") pod \"kube-storage-version-migrator-operator-b67b599dd-84v42\" (UID: \"0071e157-d4b7-40ac-8a50-35f9b7aa961d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.956197 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2g64\" (UniqueName: \"kubernetes.io/projected/107d8fb1-31b1-4bec-8d55-a27e312609b1-kube-api-access-r2g64\") pod \"csi-hostpathplugin-bj2bf\" (UID: \"107d8fb1-31b1-4bec-8d55-a27e312609b1\") " pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.960237 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:07 crc kubenswrapper[4931]: E0130 05:10:07.960786 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.460768925 +0000 UTC m=+143.830679182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.971283 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-855th\" (UniqueName: \"kubernetes.io/projected/1a8f99a6-f163-4720-8eb4-bc8607753d79-kube-api-access-855th\") pod \"collect-profiles-29495820-5cp8g\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.986726 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.991176 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clphc\" (UniqueName: \"kubernetes.io/projected/6db474c1-8489-43b1-bb9e-0961f9dc1dc4-kube-api-access-clphc\") pod \"multus-admission-controller-857f4d67dd-lmnvn\" (UID: \"6db474c1-8489-43b1-bb9e-0961f9dc1dc4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:07 crc kubenswrapper[4931]: I0130 05:10:07.992936 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.007810 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.008061 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk8ts\" (UniqueName: \"kubernetes.io/projected/48aa89b7-0ab1-432b-a693-2d56358c1d83-kube-api-access-zk8ts\") pod \"olm-operator-6b444d44fb-n4fvr\" (UID: \"48aa89b7-0ab1-432b-a693-2d56358c1d83\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.016870 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tbgzs" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.020742 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.029366 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r28cq\" (UniqueName: \"kubernetes.io/projected/bc314d0c-da50-4607-93e1-5bece9c3b2b1-kube-api-access-r28cq\") pod \"marketplace-operator-79b997595-phq4q\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.030818 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c8568"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.038786 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cl26\" (UniqueName: \"kubernetes.io/projected/63899a5c-b7ca-4eca-8418-c6b9bc1f774b-kube-api-access-6cl26\") pod \"packageserver-d55dfcdfc-gx6j5\" (UID: \"63899a5c-b7ca-4eca-8418-c6b9bc1f774b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.039062 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.056001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr6c9\" (UniqueName: \"kubernetes.io/projected/fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce-kube-api-access-xr6c9\") pod \"dns-default-4phnt\" (UID: \"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce\") " pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.071225 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.071885 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.571863125 +0000 UTC m=+143.941773382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.082754 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msk64\" (UniqueName: \"kubernetes.io/projected/eb3c187d-243f-457f-b419-e02a6898fd48-kube-api-access-msk64\") pod \"catalog-operator-68c6474976-rxvzv\" (UID: \"eb3c187d-243f-457f-b419-e02a6898fd48\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.094309 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj727\" (UniqueName: \"kubernetes.io/projected/6b53f53c-0afe-4574-aebc-64c6d81f10d9-kube-api-access-pj727\") pod \"ingress-canary-52zxd\" (UID: \"6b53f53c-0afe-4574-aebc-64c6d81f10d9\") " pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.119977 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbwmm\" (UniqueName: \"kubernetes.io/projected/8ec2553d-d0b3-4b15-a42c-73c1c25ea70f-kube-api-access-cbwmm\") pod \"migrator-59844c95c7-gb6js\" (UID: \"8ec2553d-d0b3-4b15-a42c-73c1c25ea70f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" Jan 30 05:10:08 crc kubenswrapper[4931]: W0130 05:10:08.139601 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf10add70_0777_45cf_9555_7bda3b6ebeec.slice/crio-115185d2e989f25f8a64ef9857f8ea6b8d666b3ac9799afa22f6a808267acdca WatchSource:0}: Error finding container 115185d2e989f25f8a64ef9857f8ea6b8d666b3ac9799afa22f6a808267acdca: Status 404 returned error can't find the container with id 115185d2e989f25f8a64ef9857f8ea6b8d666b3ac9799afa22f6a808267acdca 
Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.147414 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmq7k\" (UniqueName: \"kubernetes.io/projected/d11d03bb-be3f-43b0-a59b-5d9fde1c9717-kube-api-access-xmq7k\") pod \"service-ca-operator-777779d784-6pfwg\" (UID: \"d11d03bb-be3f-43b0-a59b-5d9fde1c9717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.147684 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.158055 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2b7b\" (UniqueName: \"kubernetes.io/projected/b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e-kube-api-access-l2b7b\") pod \"machine-config-server-scnkp\" (UID: \"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e\") " pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.162821 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:08 crc kubenswrapper[4931]: W0130 05:10:08.168710 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e653e21_3e72_4867_b39e_f374d752d503.slice/crio-f46822f967f3b67713b2f25757d023699ad723ad5288c2d15d844d908c545c94 WatchSource:0}: Error finding container f46822f967f3b67713b2f25757d023699ad723ad5288c2d15d844d908c545c94: Status 404 returned error can't find the container with id f46822f967f3b67713b2f25757d023699ad723ad5288c2d15d844d908c545c94 Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.175795 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.176720 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.177044 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.677027279 +0000 UTC m=+144.046937526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.182280 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.197570 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ksg\" (UniqueName: \"kubernetes.io/projected/aa1d7187-eeb1-4145-8ba0-dd1e43023003-kube-api-access-x5ksg\") pod \"package-server-manager-789f6589d5-mg25x\" (UID: \"aa1d7187-eeb1-4145-8ba0-dd1e43023003\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.198192 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gbb\" (UniqueName: \"kubernetes.io/projected/f61d7ea6-37e5-47de-89e8-8a0dc1b895f9-kube-api-access-f7gbb\") pod \"service-ca-9c57cc56f-4jb99\" (UID: \"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.206651 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.219023 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.223149 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.232809 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-scnkp" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.234166 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.239590 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.257589 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.258076 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.265805 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.272348 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-52zxd" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.276732 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.278270 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.278915 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.778873295 +0000 UTC m=+144.148783552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.382898 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.383785 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.883760452 +0000 UTC m=+144.253670709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.443073 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bj2bf"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.443117 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.485669 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.485996 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:08.985983828 +0000 UTC m=+144.355894085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.495055 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.508918 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wwdht"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.516951 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n"] Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.540815 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" event={"ID":"c142d29b-ca43-49b7-8055-3175cdf9c45e","Type":"ContainerStarted","Data":"1799b4060a05f1ed7b73abfd6c9a938d2a59baf01655db5a69f70b4b95f0ace7"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.582292 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" event={"ID":"62b9975b-f28e-46de-89a0-bac3d2e7f927","Type":"ContainerStarted","Data":"b06f7aef078eef4b63f6c8683cb94ac06a872c79a767dccca059e02ae4e4d04f"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.587005 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.587229 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.087193858 +0000 UTC m=+144.457104125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.587546 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.588086 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.088073571 +0000 UTC m=+144.457983828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.589348 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" event={"ID":"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae","Type":"ContainerStarted","Data":"79c070cb5235142e9b262f19dbb8e16d5dcb7614ebc5d2dfdda6d8f430ca0ee1"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.591020 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" event={"ID":"f56094dd-41e6-41ed-9660-73cc0a3eb1ba","Type":"ContainerStarted","Data":"7d2922e3069bcc03e72eeb30ba7364f02c8649416551786a4d4e8647ed60e55d"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.608595 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7jf2b" event={"ID":"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6","Type":"ContainerStarted","Data":"f627df2c5d97d7add678aae4bf30858e359e0469f937f2fabb4fa636467d2356"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.609118 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.609130 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7jf2b" event={"ID":"606fa13b-30a4-412e-86eb-9fcb5bc8ebb6","Type":"ContainerStarted","Data":"e4f5cd5ea507d6e0d8275941863ff61ec41db09214e1b868e8f75c038346fd6d"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.616765 4931 patch_prober.go:28] interesting pod/console-operator-58897d9998-7jf2b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.616822 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7jf2b" podUID="606fa13b-30a4-412e-86eb-9fcb5bc8ebb6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.633226 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" event={"ID":"3ac5359b-e653-4824-ad6f-4672970dc0cc","Type":"ContainerStarted","Data":"dd1a30db2d1332e33de6c4cb89d7a7c551f550f018fa31b21c5bdd572f28147d"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.633290 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" event={"ID":"3ac5359b-e653-4824-ad6f-4672970dc0cc","Type":"ContainerStarted","Data":"69234c254b9137bcdf8d0c9f00100926cca05090829b029106c26c2a7c2a25ed"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.639655 4931 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" event={"ID":"6e653e21-3e72-4867-b39e-f374d752d503","Type":"ContainerStarted","Data":"f46822f967f3b67713b2f25757d023699ad723ad5288c2d15d844d908c545c94"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.644071 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" event={"ID":"e06ad469-0fb9-47d7-90fc-3c74ef8bb833","Type":"ContainerStarted","Data":"ef71aa85d04aec87e6ca79010343f337197cd9e8ae6a72d2abb1163b3b3464ee"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.646113 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" event={"ID":"9d9ce65b-1339-4198-ae4d-5697206eba5f","Type":"ContainerStarted","Data":"e0fdeaa06d83a5588d866d9e6750f39b2a25dbaddb13a9c0653d6a6f9ee971d5"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.646140 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" event={"ID":"9d9ce65b-1339-4198-ae4d-5697206eba5f","Type":"ContainerStarted","Data":"9961ae67bbba6ffb4a389a9a98f5f5e3812deeb1bf5bd9c07bd38ed20f591212"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.659500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" event={"ID":"dbab60d9-c5df-4396-8012-94dc987f82c2","Type":"ContainerStarted","Data":"aa8e44e00db75d9aed75b6a5cf74cebb3f9c3c221b78d34bbcba3d9d8a15e25f"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.688926 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.689700 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.189681271 +0000 UTC m=+144.559591518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.690061 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.694062 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.194045316 +0000 UTC m=+144.563955573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.711499 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" event={"ID":"177d163e-7881-411f-a61b-a00e9c8bc9dc","Type":"ContainerStarted","Data":"da16aa93f1a7222f8c24df0184a32a7f6c88bc7e60e55df350f1794bb668ab0e"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.747717 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" event={"ID":"f10add70-0777-45cf-9555-7bda3b6ebeec","Type":"ContainerStarted","Data":"115185d2e989f25f8a64ef9857f8ea6b8d666b3ac9799afa22f6a808267acdca"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.761715 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-268mt" event={"ID":"21e8738e-651f-4f09-a052-1ff22028e3f3","Type":"ContainerStarted","Data":"bab7ff071b13604057a32853a504ba260840750838a51dfecb75902fa27e4a3d"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.761754 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-268mt" event={"ID":"21e8738e-651f-4f09-a052-1ff22028e3f3","Type":"ContainerStarted","Data":"5451d8c83316bfe9169f10fc93c4221f7918a21b39649b65fa46ec5547e8d5ea"} Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.778509 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.778578 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.795784 4931 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.295754839 +0000 UTC m=+144.665665096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.831973 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.832533 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.834527 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.334507227 +0000 UTC m=+144.704417484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.940690 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:08 crc kubenswrapper[4931]: I0130 05:10:08.948025 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv"] Jan 30 05:10:08 crc kubenswrapper[4931]: E0130 05:10:08.949192 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.448957915 +0000 UTC m=+144.818868172 (durationBeforeRetry 500ms). 
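[Annotation] The UnmountVolume.TearDown and MountVolume.MountDevice failures above all share one root cause: the kubelet's in-memory list of registered CSI plugins does not yet contain kubevirt.io.hostpath-provisioner, because the csi-hostpathplugin pod (its ContainerStarted events appear further down) has not yet registered over the plugin-registration socket. A minimal sketch of that lookup-then-fail pattern; the types are hypothetical stand-ins, and only the error text is taken verbatim from the log:

```go
package main

import (
	"fmt"
	"sync"
)

// driverRegistry is a stand-in for the kubelet's CSI plugin map; in the
// real kubelet, drivers appear here after registering over a gRPC socket.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint
}

func (r *driverRegistry) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		// Mirrors the error text seen in the entries above.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	r := &driverRegistry{drivers: map[string]string{}}
	if _, err := r.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("mount attempt fails:", err)
	}
	// Once the driver pod comes up and registers, the same lookup succeeds
	// and the retried mount/unmount operations go through.
	r.mu.Lock()
	r.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
	r.mu.Unlock()
	if ep, err := r.client("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("retry succeeds against", ep)
	}
}
```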
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.040410 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.048618 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.049303 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.549277421 +0000 UTC m=+144.919187678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.095582 4931 csr.go:261] certificate signing request csr-f2dd6 is approved, waiting to be issued Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.104243 4931 csr.go:257] certificate signing request csr-f2dd6 is issued Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.150341 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.151010 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.650984674 +0000 UTC m=+145.020894931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.169657 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7242g" podStartSLOduration=122.169635404 podStartE2EDuration="2m2.169635404s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:09.133752641 +0000 UTC m=+144.503662898" watchObservedRunningTime="2026-01-30 05:10:09.169635404 +0000 UTC m=+144.539545661" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.240303 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" podStartSLOduration=122.240288411 podStartE2EDuration="2m2.240288411s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:09.240005644 +0000 UTC m=+144.609915901" watchObservedRunningTime="2026-01-30 05:10:09.240288411 +0000 UTC m=+144.610198668" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.258906 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.259250 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.759234979 +0000 UTC m=+145.129145236 (durationBeforeRetry 500ms). 
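[Annotation] Each failure ends with "No retries permitted until <t> (durationBeforeRetry 500ms)": the operation executor refuses to re-run the same volume operation until a per-operation backoff window expires, which is why the identical error reappears roughly every half second rather than in a tight loop. The kubelet uses exponential backoff for failed volume operations; in this capture the logged delay is still at the 500ms initial step. A sketch of that gating pattern, with hypothetical names:

```go
package main

import (
	"fmt"
	"time"
)

// opBackoff gates retries of one volume operation, doubling the wait after
// each failure up to a cap, in the spirit of the kubelet's operation executor.
type opBackoff struct {
	next  time.Time     // no retries permitted until this instant
	delay time.Duration // durationBeforeRetry for the last failure
	cap   time.Duration
}

func (b *opBackoff) allowed(now time.Time) bool { return !now.Before(b.next) }

func (b *opBackoff) failed(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // initial durationBeforeRetry
	} else if b.delay *= 2; b.delay > b.cap {
		b.delay = b.cap
	}
	b.next = now.Add(b.delay)
}

func main() {
	b := &opBackoff{cap: 2 * time.Minute}
	now := time.Now()
	for i := 0; i < 4; i++ {
		b.failed(now)
		fmt.Printf("attempt %d failed: no retries permitted until %s (durationBeforeRetry %s)\n",
			i+1, b.next.Format(time.RFC3339Nano), b.delay)
		now = b.next
	}
}
```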
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.364241 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.364929 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.864891536 +0000 UTC m=+145.234801793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.466274 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.466877 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:09.966855025 +0000 UTC m=+145.336765282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.525951 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-k9mcd" podStartSLOduration=122.525930438 podStartE2EDuration="2m2.525930438s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:09.465159901 +0000 UTC m=+144.835070158" watchObservedRunningTime="2026-01-30 05:10:09.525930438 +0000 UTC m=+144.895840695" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.527148 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wwbmr" podStartSLOduration=122.5271423 podStartE2EDuration="2m2.5271423s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:09.495829517 +0000 UTC m=+144.865739774" watchObservedRunningTime="2026-01-30 05:10:09.5271423 +0000 UTC m=+144.897052557" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.545139 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:09 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 30 05:10:09 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:09 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.556471 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.580004 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.583132 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.583606 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.083589523 +0000 UTC m=+145.453499780 (durationBeforeRetry 500ms). 
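[Annotation] The router startup probe failure above shows the aggregated healthz response format: one "[+]"/"[-]" line per named check ("backend-http", "has-synced", "process-running"), failure reasons withheld, and an overall HTTP 500 until every check passes. A self-contained sketch of a handler producing that shape; the check names and pass/fail logic here are illustrative, not the router's actual checks:

```go
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

// healthz aggregates named checks into the [+]/[-] body seen in the probe
// output above, returning 500 if any check fails.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	http.ListenAndServe(":8080", nil) // probe: GET /healthz -> 500 until checks pass
}
```

The kubelet surfaces only the start of that body in "start-of-body=", which is why the log shows the first "[-]" lines split across journal lines.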
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.611224 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tbgzs"] Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.664883 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5"] Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.685323 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.685776 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.185757988 +0000 UTC m=+145.555668245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.727240 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d7ptq" podStartSLOduration=122.727224168 podStartE2EDuration="2m2.727224168s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:09.670027545 +0000 UTC m=+145.039937802" watchObservedRunningTime="2026-01-30 05:10:09.727224168 +0000 UTC m=+145.097134425" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.761046 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phq4q"] Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.788586 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.789306 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 05:10:10.289222167 +0000 UTC m=+145.659132424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.893632 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" event={"ID":"dbab60d9-c5df-4396-8012-94dc987f82c2","Type":"ContainerStarted","Data":"820960a390e9f5b8227dfef695af181ec6a4c3f54f9101efd319215772d57fb3"} Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.893774 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:09 crc kubenswrapper[4931]: E0130 05:10:09.902211 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.402187255 +0000 UTC m=+145.772097512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.928755 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" event={"ID":"107d8fb1-31b1-4bec-8d55-a27e312609b1","Type":"ContainerStarted","Data":"716f19dbae5782cb4bb4e2a3002de9af3be27e8955667fd2b4ee2a8d8c79f6a8"} Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.944579 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" podStartSLOduration=122.944543398 podStartE2EDuration="2m2.944543398s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:09.853160197 +0000 UTC m=+145.223070454" watchObservedRunningTime="2026-01-30 05:10:09.944543398 +0000 UTC m=+145.314453655" Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.950587 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lmnvn"] Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.950812 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq"] Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.955220 4931 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" event={"ID":"6e653e21-3e72-4867-b39e-f374d752d503","Type":"ContainerStarted","Data":"3e97cf20fee7a8088c18e6fa6083b1c2b0aab0b86b2bc7ea7af6b798dd49835a"} Jan 30 05:10:09 crc kubenswrapper[4931]: I0130 05:10:09.999022 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.000440 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.500395666 +0000 UTC m=+145.870305933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.016431 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4phnt"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.032218 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-scnkp" event={"ID":"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e","Type":"ContainerStarted","Data":"445c7e5ffc93b3b0d06e6d127c8972a85e3ee078da31be9ad50effa715e5b3ac"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.040876 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv"] Jan 30 05:10:10 crc kubenswrapper[4931]: W0130 05:10:10.064683 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb786d0_4d5f_4e2b_9f2b_5e17ef6c77ce.slice/crio-482cd8859960b178149275312ff118e048b059056f78cadc273858d76110df6d WatchSource:0}: Error finding container 482cd8859960b178149275312ff118e048b059056f78cadc273858d76110df6d: Status 404 returned error can't find the container with id 482cd8859960b178149275312ff118e048b059056f78cadc273858d76110df6d Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.064711 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" event={"ID":"25a99ace-2c29-419e-b5de-3f11b024ee43","Type":"ContainerStarted","Data":"10f5f0d13e9b86ef0a8f012d8f44236140548b8a7f53c5a261e0fe97cdc26db0"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.072945 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.082799 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" 
event={"ID":"f10add70-0777-45cf-9555-7bda3b6ebeec","Type":"ContainerStarted","Data":"209a73f85ef03f2eb1edec0c2fa0de11a35ef571d557b7d655a600b5c6a9324c"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.086825 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" podStartSLOduration=123.086805967 podStartE2EDuration="2m3.086805967s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.080519191 +0000 UTC m=+145.450429448" watchObservedRunningTime="2026-01-30 05:10:10.086805967 +0000 UTC m=+145.456716224" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.098220 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4jb99"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.100932 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.102386 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.602371116 +0000 UTC m=+145.972281383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.103604 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" event={"ID":"c142d29b-ca43-49b7-8055-3175cdf9c45e","Type":"ContainerStarted","Data":"88590e8dc9b979b0c062e5a0650bc2c8f187d7e714f08173a54d052c271e8fff"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.109904 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tbgzs" event={"ID":"fc8b1aac-27e5-4f8c-a329-821c231fb7c6","Type":"ContainerStarted","Data":"145c6fd52710339cbfd1c98ffcf108d9c1e58dde8fadd16261aca094cd0f7307"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.110117 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 05:05:09 +0000 UTC, rotation deadline is 2026-11-18 11:46:15.463168017 +0000 UTC Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.110161 4931 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7014h36m5.353010847s for next certificate rotation Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.113624 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" 
event={"ID":"8158446c-5883-48ad-86da-77db470d8214","Type":"ContainerStarted","Data":"1812025aee5a6e426ec2a0e855618d2c9b3bb33f38fe5c2b8477da74912e19f2"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.113671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" event={"ID":"8158446c-5883-48ad-86da-77db470d8214","Type":"ContainerStarted","Data":"822d7ccf9f3b9fabf66dcaa23268833edba0a9536c78d90a05de6cc7d5a9a209"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.151120 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" event={"ID":"63f183f0-7d5c-45c9-88a3-df19bd214439","Type":"ContainerStarted","Data":"e002421e9d8e8d6989b65ce72814617e8cc28645ffbc4c9400ef49d370809f39"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.165544 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ff4lr" podStartSLOduration=123.165519045 podStartE2EDuration="2m3.165519045s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.131169833 +0000 UTC m=+145.501080090" watchObservedRunningTime="2026-01-30 05:10:10.165519045 +0000 UTC m=+145.535429302" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.166607 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6wmnm" podStartSLOduration=123.166603154 podStartE2EDuration="2m3.166603154s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.163560984 +0000 UTC m=+145.533471241" watchObservedRunningTime="2026-01-30 05:10:10.166603154 +0000 UTC m=+145.536513401" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.188792 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" event={"ID":"a34eba67-19a6-4d4e-a902-9482b2847199","Type":"ContainerStarted","Data":"bc036db765cc587e68b8a1d1d21f2ceab16d55e250cf3caa914ef5d9d19cf60b"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.205460 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.207639 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.707607421 +0000 UTC m=+146.077517678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.208655 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7jf2b" podStartSLOduration=123.208635208 podStartE2EDuration="2m3.208635208s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.207903099 +0000 UTC m=+145.577813356" watchObservedRunningTime="2026-01-30 05:10:10.208635208 +0000 UTC m=+145.578545465" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.242625 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" event={"ID":"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a","Type":"ContainerStarted","Data":"a734f39440505ad6e36a5797c94de63391f20cf23b700fb30ed9c24d0e76f90b"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.243035 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" event={"ID":"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a","Type":"ContainerStarted","Data":"5c8894ede0b984d84cb2316d3e83205a96dd5b47e01b5cf55a18aebaac856129"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.274292 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" event={"ID":"f56094dd-41e6-41ed-9660-73cc0a3eb1ba","Type":"ContainerStarted","Data":"352b58dbbb3a109a1f739924167256b34bcddcdd723c52b0a9dc82720d06d21a"} Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.266178 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-268mt" podStartSLOduration=123.26614352 podStartE2EDuration="2m3.26614352s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.260568923 +0000 UTC m=+145.630479180" watchObservedRunningTime="2026-01-30 05:10:10.26614352 +0000 UTC m=+145.636053777" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.318765 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sh5fl" podStartSLOduration=123.318740832 podStartE2EDuration="2m3.318740832s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.309472788 +0000 UTC m=+145.679383045" watchObservedRunningTime="2026-01-30 05:10:10.318740832 +0000 UTC m=+145.688651089" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.343359 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.346377 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.846349037 +0000 UTC m=+146.216259294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.406406 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.465799 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.471800 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.971754263 +0000 UTC m=+146.341664520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.472420 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.482354 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:10.982334801 +0000 UTC m=+146.352245058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.524445 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7jf2b" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.545262 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" podStartSLOduration=123.545238874 podStartE2EDuration="2m3.545238874s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.544755072 +0000 UTC m=+145.914665319" watchObservedRunningTime="2026-01-30 05:10:10.545238874 +0000 UTC m=+145.915149131" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.553963 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:10 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 30 05:10:10 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:10 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.554225 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.586020 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.586394 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.086377575 +0000 UTC m=+146.456287822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.587369 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.688802 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.689437 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.689848 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.189830454 +0000 UTC m=+146.559740711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.711682 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.721177 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2d7cc" podStartSLOduration=123.721156037 podStartE2EDuration="2m3.721156037s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.713163817 +0000 UTC m=+146.083074074" watchObservedRunningTime="2026-01-30 05:10:10.721156037 +0000 UTC m=+146.091066304" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.721489 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.721558 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb"] Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.771878 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-52zxd"] Jan 30 05:10:10 crc 
kubenswrapper[4931]: I0130 05:10:10.793442 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.794112 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.294085024 +0000 UTC m=+146.663995281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.876275 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-scnkp" podStartSLOduration=6.876255653 podStartE2EDuration="6.876255653s" podCreationTimestamp="2026-01-30 05:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:10.82475072 +0000 UTC m=+146.194660977" watchObservedRunningTime="2026-01-30 05:10:10.876255653 +0000 UTC m=+146.246165910" Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.901941 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:10 crc kubenswrapper[4931]: E0130 05:10:10.902605 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.402591615 +0000 UTC m=+146.772501872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:10 crc kubenswrapper[4931]: I0130 05:10:10.906944 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5"] Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.003272 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.003601 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.5035862 +0000 UTC m=+146.873496457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.112173 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.113081 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.613062537 +0000 UTC m=+146.982972794 (durationBeforeRetry 500ms). 
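[Annotation] The pod_startup_latency_tracker entries are straightforward arithmetic over the timestamps they print. For machine-config-server above: created 05:10:04, watch-observed running 05:10:10.876255653, so podStartSLOduration=6.876255653s; and firstStartedPulling/lastFinishedPulling at the zero time (0001-01-01) mean no image pull was observed, i.e. the images were already cached, so the SLO duration is simply observed-running minus creation. A check of those exact numbers:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the machine-config-server entry above.
	created := time.Date(2026, 1, 30, 5, 10, 4, 0, time.UTC)
	observedRunning := time.Date(2026, 1, 30, 5, 10, 10, 876255653, time.UTC)

	// With no pull observed, podStartSLOduration = observedRunning - created.
	fmt.Println("podStartSLOduration =", observedRunning.Sub(created)) // 6.876255653s

	var zero time.Time // matches firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
	fmt.Println("firstStartedPulling is zero:", zero.IsZero())
}
```

The same arithmetic explains the ~2m3s-2m4s values for the operator pods: all were created at 05:08:07 and only observed running around 05:10:10.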
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.216005 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.216413 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.716389392 +0000 UTC m=+147.086299649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.248947 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.249023 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.317483 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.317957 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.817931421 +0000 UTC m=+147.187841678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.319916 4931 generic.go:334] "Generic (PLEG): container finished" podID="8158446c-5883-48ad-86da-77db470d8214" containerID="1812025aee5a6e426ec2a0e855618d2c9b3bb33f38fe5c2b8477da74912e19f2" exitCode=0 Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.320003 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" event={"ID":"8158446c-5883-48ad-86da-77db470d8214","Type":"ContainerDied","Data":"1812025aee5a6e426ec2a0e855618d2c9b3bb33f38fe5c2b8477da74912e19f2"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.348266 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" event={"ID":"63899a5c-b7ca-4eca-8418-c6b9bc1f774b","Type":"ContainerStarted","Data":"59eee242f13f592326eea47fc27c9fd67f269e9cd32e7b7f01d4b893c11049a9"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.351096 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" event={"ID":"107d8fb1-31b1-4bec-8d55-a27e312609b1","Type":"ContainerStarted","Data":"5d2207b458875d686ff369d91fae7714d79df423c53512b03ba6936ad5acd414"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.352818 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4phnt" event={"ID":"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce","Type":"ContainerStarted","Data":"482cd8859960b178149275312ff118e048b059056f78cadc273858d76110df6d"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.354069 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" event={"ID":"6e653e21-3e72-4867-b39e-f374d752d503","Type":"ContainerStarted","Data":"a69e38bc4425d484d14a5059496d9d463cff1bcddabaaac9699d49e4b73aba83"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.355860 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" event={"ID":"1a8f99a6-f163-4720-8eb4-bc8607753d79","Type":"ContainerStarted","Data":"925d7ec4214d424008eeb73fc8925f29c574b109b85902152a8bda78b7583feb"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.359935 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" event={"ID":"8ec2553d-d0b3-4b15-a42c-73c1c25ea70f","Type":"ContainerStarted","Data":"fd397060ee56100ef6da607b98736ba790b1563d7c84b4359327aa8ee3ae9f10"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.359978 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" event={"ID":"8ec2553d-d0b3-4b15-a42c-73c1c25ea70f","Type":"ContainerStarted","Data":"1f28c3132fe229940a650d73adad8e0bfdbf06169436b7fd464ed91fccfd74e7"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.361108 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" event={"ID":"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9","Type":"ContainerStarted","Data":"39b7ca9ff298e89c9df90acb10a020ce42cc74ac13632078bd1f4f834ab339bc"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.362187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" event={"ID":"63f183f0-7d5c-45c9-88a3-df19bd214439","Type":"ContainerStarted","Data":"9affac49170976fe65722d58557fc4f244e6828877b8729010551b959b08d5c9"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.402828 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" event={"ID":"6db474c1-8489-43b1-bb9e-0961f9dc1dc4","Type":"ContainerStarted","Data":"f4ba6213b720b7ea748f17ee2d070c0bd967c6e08ffb2402373a84d10b51951d"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.414459 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8568" podStartSLOduration=124.414442767 podStartE2EDuration="2m4.414442767s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.41381245 +0000 UTC m=+146.783722707" watchObservedRunningTime="2026-01-30 05:10:11.414442767 +0000 UTC m=+146.784353034" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.421911 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.422035 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:11.922004256 +0000 UTC m=+147.291914513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.422342 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.424807 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 05:10:11.924769578 +0000 UTC m=+147.294679825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.448328 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zfq7h" podStartSLOduration=124.448297567 podStartE2EDuration="2m4.448297567s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.444312832 +0000 UTC m=+146.814223079" watchObservedRunningTime="2026-01-30 05:10:11.448297567 +0000 UTC m=+146.818207824" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.454041 4931 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rxvzv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.454111 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" podUID="eb3c187d-243f-457f-b419-e02a6898fd48" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.458932 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.459092 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" event={"ID":"25a99ace-2c29-419e-b5de-3f11b024ee43","Type":"ContainerStarted","Data":"7750cd40c9d0fef2887b1b482623cb1d16e81114d992d1c633d524553a28371b"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.459187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" event={"ID":"eb3c187d-243f-457f-b419-e02a6898fd48","Type":"ContainerStarted","Data":"ee2d2bbfa2ca31e3fae4c57d3281b0161094c389f759f49d53b90f5ff089bd43"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.459269 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" event={"ID":"eb3c187d-243f-457f-b419-e02a6898fd48","Type":"ContainerStarted","Data":"a368aa881a65adedde7ec8125ad717cc3ae5d122ad48225bd395fa63650713b9"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.461854 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" event={"ID":"a34eba67-19a6-4d4e-a902-9482b2847199","Type":"ContainerStarted","Data":"fd359ba9fc381b7cbc57fbce72975e41f8adaa3bec535178a56e40ff760815f8"} Jan 30 
05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.480207 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" event={"ID":"0071e157-d4b7-40ac-8a50-35f9b7aa961d","Type":"ContainerStarted","Data":"7308689e3f6f70e433937466dfc6cc4f9177148c72528bafe42f5064a1d27153"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.499932 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" event={"ID":"bc314d0c-da50-4607-93e1-5bece9c3b2b1","Type":"ContainerStarted","Data":"dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.499994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" event={"ID":"bc314d0c-da50-4607-93e1-5bece9c3b2b1","Type":"ContainerStarted","Data":"b8dffc3066e9941e3da7e55a7eddcae34aa88188f6b968755e41658b1568e4e5"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.500599 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.508703 4931 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-phq4q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.509250 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.518906 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" podStartSLOduration=124.518879881 podStartE2EDuration="2m4.518879881s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.507212715 +0000 UTC m=+146.877122972" watchObservedRunningTime="2026-01-30 05:10:11.518879881 +0000 UTC m=+146.888790138" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.522815 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" event={"ID":"48aa89b7-0ab1-432b-a693-2d56358c1d83","Type":"ContainerStarted","Data":"c165f120e7002c7e49b3792b8faf5ee00cdc3fdbe72c9edf9357bcf5ce3731be"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.524821 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.525229 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.025195467 +0000 UTC m=+147.395105724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.525566 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.528337 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.02832058 +0000 UTC m=+147.398230837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.548679 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 05:10:11 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld
Jan 30 05:10:11 crc kubenswrapper[4931]: [+]process-running ok
Jan 30 05:10:11 crc kubenswrapper[4931]: healthz check failed
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.548752 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.557967 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" event={"ID":"d11d03bb-be3f-43b0-a59b-5d9fde1c9717","Type":"ContainerStarted","Data":"a4faf5c3f72c32004e96c239a968666917ad28090767d94c5c11fcfcc4c18347"}
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.577207 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52zxd" event={"ID":"6b53f53c-0afe-4574-aebc-64c6d81f10d9","Type":"ContainerStarted","Data":"6669c848f50e93107ea95a01f183003cc9cb854e8fd028c528fa6d1d8ba7376b"}
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.589934 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" event={"ID":"e521b474-9f29-4841-a365-ed1589358607","Type":"ContainerStarted","Data":"c9d349808c713379d952f12dbbedb3a0d7d0da509b01a2b7acc95be1929e3316"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.627090 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.627729 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" event={"ID":"f10add70-0777-45cf-9555-7bda3b6ebeec","Type":"ContainerStarted","Data":"97145adefbb638d94813a3bee5af6eeadd1dc166f942eff4d9310576e91c3244"} Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.628098 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.128069961 +0000 UTC m=+147.497980218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.666570 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" event={"ID":"907e32f6-7e41-43fb-862c-c6a5f835ff73","Type":"ContainerStarted","Data":"1562adb50e57a2a6211768fe7a2f6194dcb99bf742d732ddb537d7970604e0d0"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.666651 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" event={"ID":"907e32f6-7e41-43fb-862c-c6a5f835ff73","Type":"ContainerStarted","Data":"c2fed5e9ad93508252bfa0c1fc29fb823dc594f4543551f2f703f98b8bb963e3"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.673859 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wqjcv" podStartSLOduration=124.673832194 podStartE2EDuration="2m4.673832194s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.621278403 +0000 UTC m=+146.991188660" watchObservedRunningTime="2026-01-30 05:10:11.673832194 +0000 UTC m=+147.043742451" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.680375 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" podStartSLOduration=124.680348685 podStartE2EDuration="2m4.680348685s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.673002382 +0000 UTC m=+147.042912639" watchObservedRunningTime="2026-01-30 05:10:11.680348685 +0000 UTC m=+147.050258942" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.697681 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" event={"ID":"c142d29b-ca43-49b7-8055-3175cdf9c45e","Type":"ContainerStarted","Data":"9373e7e356b5a3ebfb940bf62868db4278dac782cad114e8ac3779c0b8a1f774"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.705198 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sc4j" podStartSLOduration=124.705175547 podStartE2EDuration="2m4.705175547s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.701614734 +0000 UTC m=+147.071524981" watchObservedRunningTime="2026-01-30 05:10:11.705175547 +0000 UTC m=+147.075085804" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.729113 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" event={"ID":"aa1d7187-eeb1-4145-8ba0-dd1e43023003","Type":"ContainerStarted","Data":"d2b39db5352926f4d329e0ddd8917b1c0b2fdc23bf2a28b5d899d0d3de638c90"} Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.730592 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.730946 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.230931784 +0000 UTC m=+147.600842041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.786245 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" podStartSLOduration=124.786213497 podStartE2EDuration="2m4.786213497s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.752294966 +0000 UTC m=+147.122205223" watchObservedRunningTime="2026-01-30 05:10:11.786213497 +0000 UTC m=+147.156123754"
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.793640 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-scnkp" event={"ID":"b5d96c05-25a8-4a0b-b6a0-9ca3781ac12e","Type":"ContainerStarted","Data":"50ce1bb5d9c49d5eba29fd94db26330747effdd341a08b35e1f7dc00a045f5b2"}
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.813984 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" event={"ID":"f6fa9de7-6e76-472c-99c8-51a6c52eb6ae","Type":"ContainerStarted","Data":"24f84e9f6129c86faf1cceef9013ed3cdcc829143086f12ad6b2fb569b6d0d96"}
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.833179 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.833583 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.333568841 +0000 UTC m=+147.703479098 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
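Each failed volume operation above is parked by nestedpendingoperations with a "No retries permitted until ..." deadline. The durationBeforeRetry of 500ms matches the initial value of kubelet's exponential-backoff helper for volume operations; on repeated failures of the same parked operation the delay doubles up to a cap on the order of two minutes. A rough sketch of that retry shape; the constants are taken from the log and assumed kubelet defaults, and the stand-in mountOnce function is invented for illustration, not kubelet's actual code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountOnce stands in for a CSI MountDevice attempt against an unregistered driver.
func mountOnce(driverRegistered bool) error {
	if !driverRegistered {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond                // initial backoff, as printed in the log
	const maxDelay = 2*time.Minute + 2*time.Second // cap, assumed from kubelet sources
	for attempt := 1; ; attempt++ {
		registered := attempt >= 4 // pretend the driver registers before the 4th try
		if err := mountOnce(registered); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Printf("attempt %d succeeded\n", attempt)
		return
	}
}

Note that throughout this stretch of the log the printed delay stays at its 500ms floor rather than growing.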
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.839327 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dzzxq" podStartSLOduration=124.839300372 podStartE2EDuration="2m4.839300372s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.788687572 +0000 UTC m=+147.158597829" watchObservedRunningTime="2026-01-30 05:10:11.839300372 +0000 UTC m=+147.209210619"
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.865584 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" event={"ID":"0fbcd41f-dfc6-4581-9f07-74e0efea3f1a","Type":"ContainerStarted","Data":"da6724cbad15f9c08591bbc95a5a2ca42ca54ae8221777df9cf685d7f5d33b58"}
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.866552 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-r62wb" podStartSLOduration=124.866530808 podStartE2EDuration="2m4.866530808s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.838765388 +0000 UTC m=+147.208675645" watchObservedRunningTime="2026-01-30 05:10:11.866530808 +0000 UTC m=+147.236441065"
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.876255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tbgzs" event={"ID":"fc8b1aac-27e5-4f8c-a329-821c231fb7c6","Type":"ContainerStarted","Data":"331b6a7b0f00f9612e135077c6cae626b67f9cf29a228aaf37d7c4c26d10810d"}
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.877535 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tbgzs"
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.888665 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbgzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.888742 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbgzs" podUID="fc8b1aac-27e5-4f8c-a329-821c231fb7c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.916885 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l98zf" podStartSLOduration=124.91686035 podStartE2EDuration="2m4.91686035s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.902352399 +0000 UTC m=+147.272262656" watchObservedRunningTime="2026-01-30 05:10:11.91686035 +0000 UTC m=+147.286770607"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.902352399 +0000 UTC m=+147.272262656" watchObservedRunningTime="2026-01-30 05:10:11.91686035 +0000 UTC m=+147.286770607" Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.940082 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:11 crc kubenswrapper[4931]: E0130 05:10:11.942165 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.442134085 +0000 UTC m=+147.812044342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:11 crc kubenswrapper[4931]: I0130 05:10:11.993511 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5c95n" podStartSLOduration=124.993493004 podStartE2EDuration="2m4.993493004s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:11.981842018 +0000 UTC m=+147.351752275" watchObservedRunningTime="2026-01-30 05:10:11.993493004 +0000 UTC m=+147.363403261" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.046623 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.048376 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.548358516 +0000 UTC m=+147.918268773 (durationBeforeRetry 500ms). 
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.151416 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.151837 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.651823095 +0000 UTC m=+148.021733352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.200160 4931 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8ndkb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]log ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]etcd ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/max-in-flight-filter ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 30 05:10:12 crc kubenswrapper[4931]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/openshift.io-startinformers ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 05:10:12 crc kubenswrapper[4931]: livez check failed
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.200210 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" podUID="dbab60d9-c5df-4396-8012-94dc987f82c2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
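The probe body quoted above is the verbose output of a Kubernetes aggregated health endpoint: each registered check prints as [+]name ok or [-]name failed, and the endpoint returns HTTP 500 until every check passes, which is why openshift-apiserver keeps failing its startup probe while its two authorization.openshift.io post-start hooks are still pending. Kubelet's HTTP prober is essentially a GET that records the status code and body. A hand-rolled equivalent for inspecting such an endpoint; the URL reuses the oauth-apiserver livez address that appears later in this log, and InsecureSkipVerify is only for a quick manual look, not something the real prober does by default:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The endpoint serves TLS with a cluster-internal CA, hence the skipped verification
	// for this one-off check. "?verbose" asks for the per-check breakdown even on success.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://10.217.0.11:8443/livez?verbose")
	if err != nil {
		// While the server is still coming up this is the familiar "connect: connection refused".
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d\n%s", resp.StatusCode, body) // the same [+]/[-] lines seen in the log
}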
pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" podUID="dbab60d9-c5df-4396-8012-94dc987f82c2" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.257675 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.757644266 +0000 UTC m=+148.127554523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.257784 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.258542 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.258978 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.758970691 +0000 UTC m=+148.128880938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.361837 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.362257 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.862240685 +0000 UTC m=+148.232150942 (durationBeforeRetry 500ms). 
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.463670 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.463987 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:12.963975659 +0000 UTC m=+148.333885916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.549887 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 05:10:12 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld
Jan 30 05:10:12 crc kubenswrapper[4931]: [+]process-running ok
Jan 30 05:10:12 crc kubenswrapper[4931]: healthz check failed
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.550316 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.564828 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.565258 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.06523942 +0000 UTC m=+148.435149677 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.666878 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.667259 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.167247801 +0000 UTC m=+148.537158058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.768485 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.768877 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.268861021 +0000 UTC m=+148.638771278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.870281 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.870592 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.370581064 +0000 UTC m=+148.740491321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.890664 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" event={"ID":"f61d7ea6-37e5-47de-89e8-8a0dc1b895f9","Type":"ContainerStarted","Data":"070a6faeedaa98e2ef6e21e22031a225b72c05425b2cc201d937d490c1768656"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.894252 4931 generic.go:334] "Generic (PLEG): container finished" podID="25a99ace-2c29-419e-b5de-3f11b024ee43" containerID="7750cd40c9d0fef2887b1b482623cb1d16e81114d992d1c633d524553a28371b" exitCode=0 Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.894298 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" event={"ID":"25a99ace-2c29-419e-b5de-3f11b024ee43","Type":"ContainerDied","Data":"7750cd40c9d0fef2887b1b482623cb1d16e81114d992d1c633d524553a28371b"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.894359 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" event={"ID":"25a99ace-2c29-419e-b5de-3f11b024ee43","Type":"ContainerStarted","Data":"d0ebfb4796177ce5d2f97225b61917883686ba7754e5abf03dcc7723817f548c"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.896708 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-52zxd" event={"ID":"6b53f53c-0afe-4574-aebc-64c6d81f10d9","Type":"ContainerStarted","Data":"26ed8b69c544c6a6a40d832c21f3653e3c1ee58e9c255975345fd23ae00e97fb"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.898794 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" 
event={"ID":"0071e157-d4b7-40ac-8a50-35f9b7aa961d","Type":"ContainerStarted","Data":"3e80aa8483d5c39e31d431dfff2b17161d28f87a2bff18f79bf6df5b318dbd66"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.901415 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" event={"ID":"6db474c1-8489-43b1-bb9e-0961f9dc1dc4","Type":"ContainerStarted","Data":"966a8b56a0cd29c0d03de4d675353b24308f545b212e5aee4d14328c6df6b7bb"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.901471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" event={"ID":"6db474c1-8489-43b1-bb9e-0961f9dc1dc4","Type":"ContainerStarted","Data":"8c8724012f411472cde1d05efea748fd76c75afc1a34222827cc887796d90a7b"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.905486 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4phnt" event={"ID":"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce","Type":"ContainerStarted","Data":"c887ef4525a06db76b0777a24ea390f5605ebdb97a053a363a216b631473280a"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.905515 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4phnt" event={"ID":"fdb786d0-4d5f-4e2b-9f2b-5e17ef6c77ce","Type":"ContainerStarted","Data":"905f34b53355a70cfee7d2120983e52950c9a688eec5e904ffe4eb9a43173c56"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.905857 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.907597 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" event={"ID":"63899a5c-b7ca-4eca-8418-c6b9bc1f774b","Type":"ContainerStarted","Data":"2f0546229f980ec7b9a46854704bd46525b35dcd6613e96f66c311941bdb1380"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.908287 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.911046 4931 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gx6j5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.911093 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" podUID="63899a5c-b7ca-4eca-8418-c6b9bc1f774b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.911378 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" event={"ID":"48aa89b7-0ab1-432b-a693-2d56358c1d83","Type":"ContainerStarted","Data":"f41be65f196765acfd7674543e25034e5f5217f9a43df82a2d1cc2cc2d6a6170"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.911956 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.913559 4931 patch_prober.go:28] 
interesting pod/olm-operator-6b444d44fb-n4fvr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.913588 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" podUID="48aa89b7-0ab1-432b-a693-2d56358c1d83" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.915182 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" event={"ID":"aa1d7187-eeb1-4145-8ba0-dd1e43023003","Type":"ContainerStarted","Data":"4ec4b05f8b1f9a05c8644b6183b1ae9bf0757c2f973729ade9c9ffa179f141ea"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.915211 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" event={"ID":"aa1d7187-eeb1-4145-8ba0-dd1e43023003","Type":"ContainerStarted","Data":"f2d2d2c1d5060878fd889c6c9440b533b28c30e2d9cd909a1261840854e2c3ca"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.915532 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.918139 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" event={"ID":"e521b474-9f29-4841-a365-ed1589358607","Type":"ContainerStarted","Data":"dfcd024b2f504a863bbcbc60457cb93ebf2ec2f189b4fb99017279711c4d374b"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.919976 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" event={"ID":"8ec2553d-d0b3-4b15-a42c-73c1c25ea70f","Type":"ContainerStarted","Data":"ace69891c41d64e610c291a77fcc35e2d208c29663c6aee22b44205b825605f7"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.921909 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6pfwg" event={"ID":"d11d03bb-be3f-43b0-a59b-5d9fde1c9717","Type":"ContainerStarted","Data":"95f6ce1bc4e15c222b9fce15112cc0e0076f6e1ee9a889d3822f17d045d274f0"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.923226 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" event={"ID":"1a8f99a6-f163-4720-8eb4-bc8607753d79","Type":"ContainerStarted","Data":"76f686a64b7bcba52e9cf572d78b41631a5873f435dbdf098126fe32ac5ccc3f"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.928565 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" event={"ID":"8158446c-5883-48ad-86da-77db470d8214","Type":"ContainerStarted","Data":"b8ee44b302d783cba73cd5cb20413829996b68d6d940ca0f42328230d40f3f4e"} Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.928592 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:12 crc 
kubenswrapper[4931]: I0130 05:10:12.931353 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbgzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.931401 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbgzs" podUID="fc8b1aac-27e5-4f8c-a329-821c231fb7c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.931463 4931 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-phq4q container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.931475 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.952931 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tbgzs" podStartSLOduration=125.952915618 podStartE2EDuration="2m5.952915618s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:12.05232677 +0000 UTC m=+147.422237027" watchObservedRunningTime="2026-01-30 05:10:12.952915618 +0000 UTC m=+148.322825875" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.970274 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rxvzv" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.971189 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.971383 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.471357103 +0000 UTC m=+148.841267360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.971471 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:12 crc kubenswrapper[4931]: E0130 05:10:12.971926 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.471903657 +0000 UTC m=+148.841813914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.994157 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4jb99" podStartSLOduration=125.994136891 podStartE2EDuration="2m5.994136891s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:12.956725488 +0000 UTC m=+148.326635745" watchObservedRunningTime="2026-01-30 05:10:12.994136891 +0000 UTC m=+148.364047138" Jan 30 05:10:12 crc kubenswrapper[4931]: I0130 05:10:12.996536 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x" podStartSLOduration=125.996530264 podStartE2EDuration="2m5.996530264s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:12.995195519 +0000 UTC m=+148.365105776" watchObservedRunningTime="2026-01-30 05:10:12.996530264 +0000 UTC m=+148.366440521" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.021673 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.021752 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.023927 4931 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-pspt5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get 
\"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.024018 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" podUID="25a99ace-2c29-419e-b5de-3f11b024ee43" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.027693 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" podStartSLOduration=126.027679073 podStartE2EDuration="2m6.027679073s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.026525622 +0000 UTC m=+148.396435889" watchObservedRunningTime="2026-01-30 05:10:13.027679073 +0000 UTC m=+148.397589330" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.048995 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-84v42" podStartSLOduration=126.048977722 podStartE2EDuration="2m6.048977722s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.047694019 +0000 UTC m=+148.417604276" watchObservedRunningTime="2026-01-30 05:10:13.048977722 +0000 UTC m=+148.418887979" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.073174 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.073376 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.573348623 +0000 UTC m=+148.943258880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.076078 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.079279 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.579261068 +0000 UTC m=+148.949171325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.112818 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gb6js" podStartSLOduration=126.11279368 podStartE2EDuration="2m6.11279368s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.081204009 +0000 UTC m=+148.451114266" watchObservedRunningTime="2026-01-30 05:10:13.11279368 +0000 UTC m=+148.482703937" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.114908 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" podStartSLOduration=126.114901115 podStartE2EDuration="2m6.114901115s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.107938332 +0000 UTC m=+148.477848579" watchObservedRunningTime="2026-01-30 05:10:13.114901115 +0000 UTC m=+148.484811372" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.145056 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4phnt" podStartSLOduration=8.145037327 podStartE2EDuration="8.145037327s" podCreationTimestamp="2026-01-30 05:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.141849783 +0000 UTC m=+148.511760040" watchObservedRunningTime="2026-01-30 05:10:13.145037327 +0000 UTC m=+148.514947584" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.179374 4931 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.179610 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.179663 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.179689 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.179720 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.179759 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.679734119 +0000 UTC m=+149.049644366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.179806 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.180201 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.680191281 +0000 UTC m=+149.050101538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.185338 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lmnvn" podStartSLOduration=126.185311865 podStartE2EDuration="2m6.185311865s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.165800543 +0000 UTC m=+148.535710810" watchObservedRunningTime="2026-01-30 05:10:13.185311865 +0000 UTC m=+148.555222112" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.185553 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" podStartSLOduration=126.185549022 podStartE2EDuration="2m6.185549022s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.179556474 +0000 UTC m=+148.549466731" watchObservedRunningTime="2026-01-30 05:10:13.185549022 +0000 UTC m=+148.555459279" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.185882 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.192477 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.198281 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.199235 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.200963 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.223560 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-52zxd" podStartSLOduration=8.22354272 podStartE2EDuration="8.22354272s" podCreationTimestamp="2026-01-30 05:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.203999316 +0000 UTC m=+148.573909593" watchObservedRunningTime="2026-01-30 05:10:13.22354272 +0000 UTC m=+148.593452967" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.252853 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" podStartSLOduration=126.25283334 podStartE2EDuration="2m6.25283334s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.249001469 +0000 UTC m=+148.618911726" watchObservedRunningTime="2026-01-30 05:10:13.25283334 +0000 UTC m=+148.622743597" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.287599 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.288198 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.788175069 +0000 UTC m=+149.158085326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.292626 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" podStartSLOduration=126.292598905 podStartE2EDuration="2m6.292598905s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.28935416 +0000 UTC m=+148.659264417" watchObservedRunningTime="2026-01-30 05:10:13.292598905 +0000 UTC m=+148.662509162" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.348096 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-w2zzb" podStartSLOduration=126.348068443 podStartE2EDuration="2m6.348068443s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:13.317877589 +0000 UTC m=+148.687787846" watchObservedRunningTime="2026-01-30 05:10:13.348068443 +0000 UTC m=+148.717978700" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.390235 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.390662 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.890645802 +0000 UTC m=+149.260556059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.453127 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.482731 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.490857 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.491105 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.99107473 +0000 UTC m=+149.360984987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.491382 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.491822 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:13.991815609 +0000 UTC m=+149.361725866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.566758 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:13 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 30 05:10:13 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:13 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.566841 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.595859 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.596190 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.096168612 +0000 UTC m=+149.466078869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.702527 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.702957 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.202944878 +0000 UTC m=+149.572855135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: W0130 05:10:13.802261 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f5a040cc9674363783d069aa1b9802e19208315ea912837e33dd68f24600a4f4 WatchSource:0}: Error finding container f5a040cc9674363783d069aa1b9802e19208315ea912837e33dd68f24600a4f4: Status 404 returned error can't find the container with id f5a040cc9674363783d069aa1b9802e19208315ea912837e33dd68f24600a4f4 Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.803978 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.804110 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.304082656 +0000 UTC m=+149.673992913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.804581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.805204 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.305179355 +0000 UTC m=+149.675089612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.838087 4931 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.905703 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:13 crc kubenswrapper[4931]: E0130 05:10:13.905972 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.405955803 +0000 UTC m=+149.775866060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.945529 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f5a040cc9674363783d069aa1b9802e19208315ea912837e33dd68f24600a4f4"} Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.958935 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" event={"ID":"107d8fb1-31b1-4bec-8d55-a27e312609b1","Type":"ContainerStarted","Data":"dcc07b917752e4701b33de49bd4c70afad023ba346e2b08fbb6a3ea298f25922"} Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.958983 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" event={"ID":"107d8fb1-31b1-4bec-8d55-a27e312609b1","Type":"ContainerStarted","Data":"939ca2e71ca9511bbb2e4bd38be07de31a2e1d98b0e0172601c8b9718a391a15"} Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.963454 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbgzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.963538 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbgzs" podUID="fc8b1aac-27e5-4f8c-a329-821c231fb7c6" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 05:10:13 crc kubenswrapper[4931]: I0130 05:10:13.977501 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n4fvr" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.010088 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.013227 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.513191171 +0000 UTC m=+149.883101418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.117902 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.118518 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.618495089 +0000 UTC m=+149.988405346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.219869 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.220725 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 05:10:14.720704835 +0000 UTC m=+150.090615092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.320790 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.321250 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.821221656 +0000 UTC m=+150.191131913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.371298 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5fcn"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.372731 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.405479 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.424739 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-utilities\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.424790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6xd\" (UniqueName: \"kubernetes.io/projected/9163b44e-4aa5-422c-a2fd-55747c8d506e-kube-api-access-zg6xd\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.424862 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.424887 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-catalog-content\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.425340 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:14.925325562 +0000 UTC m=+150.295235819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.493908 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5fcn"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.525944 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.526475 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-catalog-content\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.526567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-utilities\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.526594 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6xd\" (UniqueName: \"kubernetes.io/projected/9163b44e-4aa5-422c-a2fd-55747c8d506e-kube-api-access-zg6xd\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.527236 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:15.02721168 +0000 UTC m=+150.397121937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.527792 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-catalog-content\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.528083 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-utilities\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.550237 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:14 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 30 05:10:14 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:14 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.550313 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.566458 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-frnwj"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.569062 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frnwj"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.569171 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.574129 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.595587 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6xd\" (UniqueName: \"kubernetes.io/projected/9163b44e-4aa5-422c-a2fd-55747c8d506e-kube-api-access-zg6xd\") pod \"community-operators-k5fcn\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.636557 4931 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T05:10:13.83811093Z","Handler":null,"Name":""} Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.637539 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-catalog-content\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.637633 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.637686 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6cpz\" (UniqueName: \"kubernetes.io/projected/0dbdc3df-7306-41e4-93c6-d7d27d481789-kube-api-access-z6cpz\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.637722 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-utilities\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.638205 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 05:10:15.138186826 +0000 UTC m=+150.508097083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f8zg7" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.730484 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gx6j5" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.743079 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.743267 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6cpz\" (UniqueName: \"kubernetes.io/projected/0dbdc3df-7306-41e4-93c6-d7d27d481789-kube-api-access-z6cpz\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.743295 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-utilities\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.743355 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-catalog-content\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: E0130 05:10:14.743824 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 05:10:15.243799212 +0000 UTC m=+150.613709469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.743892 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-catalog-content\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.744212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-utilities\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.750657 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4j7wh"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.751712 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.780243 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6cpz\" (UniqueName: \"kubernetes.io/projected/0dbdc3df-7306-41e4-93c6-d7d27d481789-kube-api-access-z6cpz\") pod \"certified-operators-frnwj\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.780747 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j7wh"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.785070 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.803122 4931 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.803169 4931 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.851363 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5brk\" (UniqueName: \"kubernetes.io/projected/39e99a4f-8956-424c-a4c6-7a67f9983cd0-kube-api-access-v5brk\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.851482 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-utilities\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.851525 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-catalog-content\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.851553 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.873385 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.873446 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.938693 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f8zg7\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.943945 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.953052 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pfl6d"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.953763 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.953938 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5brk\" (UniqueName: \"kubernetes.io/projected/39e99a4f-8956-424c-a4c6-7a67f9983cd0-kube-api-access-v5brk\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.953993 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-utilities\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.954029 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-catalog-content\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.954276 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.954448 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-catalog-content\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.955003 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-utilities\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.979775 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pfl6d"] Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.982875 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5brk\" (UniqueName: \"kubernetes.io/projected/39e99a4f-8956-424c-a4c6-7a67f9983cd0-kube-api-access-v5brk\") pod \"community-operators-4j7wh\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:14 crc kubenswrapper[4931]: I0130 05:10:14.989876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.009961 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.012537 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f08533c8c1e5e422c76f7d093f8442201c4a8c0eb6f27404db8c3f13e460af3d"} Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.012575 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3693cd0900980aaf02b98121d11ee70a79160d1598ac0fde1e855eb064cb7c4a"} Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.014237 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ac537307a51a0abf53217144aeb8e8505649d78e862fdfde9f5b7b863b8585ee"} Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.016734 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" event={"ID":"107d8fb1-31b1-4bec-8d55-a27e312609b1","Type":"ContainerStarted","Data":"18057f42a99b06d4067370e7058db919da5a13cb4cd6ff4eda3c3b5719b57209"} Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.018925 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2e170e479a37fb7f0f7ea50509dc59b9ba98d604e2d4a30f544e2559d870dec5"} Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.018953 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"75efaad328d23840ed695e63aadcacb49e97d45449f35d6f23aac0c16b4637c7"} Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.019250 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.061033 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn84w\" (UniqueName: \"kubernetes.io/projected/72ab8593-3b5e-421a-ac80-b85376b21ffe-kube-api-access-xn84w\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.061182 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-utilities\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.061367 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-catalog-content\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: 
I0130 05:10:15.070006 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.098042 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bj2bf" podStartSLOduration=10.09800645 podStartE2EDuration="10.09800645s" podCreationTimestamp="2026-01-30 05:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:15.067633292 +0000 UTC m=+150.437543549" watchObservedRunningTime="2026-01-30 05:10:15.09800645 +0000 UTC m=+150.467916707" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.162439 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-catalog-content\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.162516 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn84w\" (UniqueName: \"kubernetes.io/projected/72ab8593-3b5e-421a-ac80-b85376b21ffe-kube-api-access-xn84w\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.162574 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-utilities\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.163603 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-utilities\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.164828 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-catalog-content\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.200710 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn84w\" (UniqueName: \"kubernetes.io/projected/72ab8593-3b5e-421a-ac80-b85376b21ffe-kube-api-access-xn84w\") pod \"certified-operators-pfl6d\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.291006 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.351645 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5fcn"] Jan 30 05:10:15 crc kubenswrapper[4931]: W0130 05:10:15.390590 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9163b44e_4aa5_422c_a2fd_55747c8d506e.slice/crio-072fd632153afb9e250c4d51854168ac5eaa4674e8d0bff4bbfe11fe55d97dbc WatchSource:0}: Error finding container 072fd632153afb9e250c4d51854168ac5eaa4674e8d0bff4bbfe11fe55d97dbc: Status 404 returned error can't find the container with id 072fd632153afb9e250c4d51854168ac5eaa4674e8d0bff4bbfe11fe55d97dbc Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.402541 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frnwj"] Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.453241 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.522583 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f8zg7"] Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.547176 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:15 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 30 05:10:15 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:15 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.547251 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.591568 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j7wh"] Jan 30 05:10:15 crc kubenswrapper[4931]: W0130 05:10:15.616778 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39e99a4f_8956_424c_a4c6_7a67f9983cd0.slice/crio-450b4e5ad73b8f494223ffa7ed558e29541dc036fd17fb80d85509be86652339 WatchSource:0}: Error finding container 450b4e5ad73b8f494223ffa7ed558e29541dc036fd17fb80d85509be86652339: Status 404 returned error can't find the container with id 450b4e5ad73b8f494223ffa7ed558e29541dc036fd17fb80d85509be86652339 Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.710990 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pfl6d"] Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.946476 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.947216 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.949819 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.949817 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.956869 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.975016 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792f11bc-0559-4037-8c28-1628f1cc0ec7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:15 crc kubenswrapper[4931]: I0130 05:10:15.975120 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792f11bc-0559-4037-8c28-1628f1cc0ec7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.025138 4931 generic.go:334] "Generic (PLEG): container finished" podID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerID="47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b" exitCode=0 Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.025207 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerDied","Data":"47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.025256 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerStarted","Data":"8fc2bd9106d95cb2212067bb79c5743a637b67855826a61a2a9690fea3308441"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.027651 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.028370 4931 generic.go:334] "Generic (PLEG): container finished" podID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerID="0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893" exitCode=0 Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.028438 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerDied","Data":"0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.028456 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerStarted","Data":"450b4e5ad73b8f494223ffa7ed558e29541dc036fd17fb80d85509be86652339"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.030357 4931 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerStarted","Data":"d2226ce036426e1065e325748a9372dea2501d4becb5598917d5e4c3d429e02b"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.033102 4931 generic.go:334] "Generic (PLEG): container finished" podID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerID="395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae" exitCode=0 Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.033155 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerDied","Data":"395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.033224 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerStarted","Data":"072fd632153afb9e250c4d51854168ac5eaa4674e8d0bff4bbfe11fe55d97dbc"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.035004 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" event={"ID":"32e4a367-9945-4fdb-b5bc-4c8d35512264","Type":"ContainerStarted","Data":"a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.035045 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" event={"ID":"32e4a367-9945-4fdb-b5bc-4c8d35512264","Type":"ContainerStarted","Data":"79ebc9473f22f72df11aa297cb419ebdd7c57ca36caf670a91a0d056621b7c54"} Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.035246 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.076478 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792f11bc-0559-4037-8c28-1628f1cc0ec7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.076571 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792f11bc-0559-4037-8c28-1628f1cc0ec7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.076652 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792f11bc-0559-4037-8c28-1628f1cc0ec7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.102442 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792f11bc-0559-4037-8c28-1628f1cc0ec7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.247380 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.252687 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8ndkb" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.263285 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.269448 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" podStartSLOduration=129.269402594 podStartE2EDuration="2m9.269402594s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:16.183946038 +0000 UTC m=+151.553856295" watchObservedRunningTime="2026-01-30 05:10:16.269402594 +0000 UTC m=+151.639312871" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.449230 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.449303 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.474551 4931 patch_prober.go:28] interesting pod/console-f9d7485db-ff4lr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.474644 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ff4lr" podUID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.553744 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:16 crc kubenswrapper[4931]: [-]has-synced failed: reason withheld Jan 30 05:10:16 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:16 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.553809 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.569381 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7jp5s"] Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.570492 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.577698 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.583390 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8xcp\" (UniqueName: \"kubernetes.io/projected/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-kube-api-access-d8xcp\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.583468 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-utilities\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.583496 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-catalog-content\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.590676 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jp5s"] Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.628087 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.685374 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-catalog-content\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.685865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8xcp\" (UniqueName: \"kubernetes.io/projected/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-kube-api-access-d8xcp\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.685917 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-utilities\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.686351 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-catalog-content\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.688023 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-utilities\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.712497 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8xcp\" (UniqueName: \"kubernetes.io/projected/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-kube-api-access-d8xcp\") pod \"redhat-marketplace-7jp5s\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.773036 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wwdht" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.927183 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.956954 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mltbk"] Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.958695 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.960767 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mltbk"] Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.999810 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjhnn\" (UniqueName: \"kubernetes.io/projected/025f8209-dd2a-482c-8bb2-e0ad2a98a563-kube-api-access-xjhnn\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:16 crc kubenswrapper[4931]: I0130 05:10:16.999863 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-utilities\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:16.999912 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-catalog-content\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.045085 4931 generic.go:334] "Generic (PLEG): container finished" podID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerID="bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5" exitCode=0 Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.045898 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerDied","Data":"bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5"} Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.054003 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"792f11bc-0559-4037-8c28-1628f1cc0ec7","Type":"ContainerStarted","Data":"6d7890ef195ce83aca04b6656932ece915976829dbe4227034fe4866b13227f6"} Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.101319 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-catalog-content\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.103074 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjhnn\" (UniqueName: \"kubernetes.io/projected/025f8209-dd2a-482c-8bb2-e0ad2a98a563-kube-api-access-xjhnn\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.103106 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-utilities\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.104263 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-utilities\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.106460 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-catalog-content\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.125770 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjhnn\" (UniqueName: \"kubernetes.io/projected/025f8209-dd2a-482c-8bb2-e0ad2a98a563-kube-api-access-xjhnn\") pod \"redhat-marketplace-mltbk\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.246774 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jp5s"] Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.283439 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.540609 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.544658 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z64mf"] Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.546388 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.548464 4931 patch_prober.go:28] interesting pod/router-default-5444994796-268mt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 05:10:17 crc kubenswrapper[4931]: [+]has-synced ok Jan 30 05:10:17 crc kubenswrapper[4931]: [+]process-running ok Jan 30 05:10:17 crc kubenswrapper[4931]: healthz check failed Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.553057 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-268mt" podUID="21e8738e-651f-4f09-a052-1ff22028e3f3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.554843 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.562278 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z64mf"] Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.717354 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-catalog-content\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.717406 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gdhb\" (UniqueName: \"kubernetes.io/projected/bb356dde-8435-471d-a260-8966eeb15eb3-kube-api-access-9gdhb\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.717553 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-utilities\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.818862 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-utilities\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.819221 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-catalog-content\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.819438 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gdhb\" (UniqueName: \"kubernetes.io/projected/bb356dde-8435-471d-a260-8966eeb15eb3-kube-api-access-9gdhb\") pod \"redhat-operators-z64mf\" (UID: 
\"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.819953 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-catalog-content\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.820209 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-utilities\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.848505 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gdhb\" (UniqueName: \"kubernetes.io/projected/bb356dde-8435-471d-a260-8966eeb15eb3-kube-api-access-9gdhb\") pod \"redhat-operators-z64mf\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.850773 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mltbk"] Jan 30 05:10:17 crc kubenswrapper[4931]: W0130 05:10:17.872286 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod025f8209_dd2a_482c_8bb2_e0ad2a98a563.slice/crio-38ccccf46a0ae1cf8c452695abc11e45f7b2dbc3854f74a91d1dd3b017d08519 WatchSource:0}: Error finding container 38ccccf46a0ae1cf8c452695abc11e45f7b2dbc3854f74a91d1dd3b017d08519: Status 404 returned error can't find the container with id 38ccccf46a0ae1cf8c452695abc11e45f7b2dbc3854f74a91d1dd3b017d08519 Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.897197 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.945667 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56dq5"] Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.948810 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:17 crc kubenswrapper[4931]: I0130 05:10:17.966052 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56dq5"] Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.018235 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbgzs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.018304 4931 patch_prober.go:28] interesting pod/downloads-7954f5f757-tbgzs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.018308 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tbgzs" podUID="fc8b1aac-27e5-4f8c-a329-821c231fb7c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.018454 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tbgzs" podUID="fc8b1aac-27e5-4f8c-a329-821c231fb7c6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.027736 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.034682 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pspt5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.094301 4931 generic.go:334] "Generic (PLEG): container finished" podID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerID="ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2" exitCode=0 Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.094664 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jp5s" event={"ID":"9ac0e0dc-4375-4faf-a262-2cf4e9772a29","Type":"ContainerDied","Data":"ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2"} Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.094694 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jp5s" event={"ID":"9ac0e0dc-4375-4faf-a262-2cf4e9772a29","Type":"ContainerStarted","Data":"827a507dec87e3e9291f3f56b6d8162668e69da1d6e51e16d8c5431ea4ab1518"} Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.107018 4931 generic.go:334] "Generic (PLEG): container finished" podID="1a8f99a6-f163-4720-8eb4-bc8607753d79" containerID="76f686a64b7bcba52e9cf572d78b41631a5873f435dbdf098126fe32ac5ccc3f" exitCode=0 Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.107137 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" event={"ID":"1a8f99a6-f163-4720-8eb4-bc8607753d79","Type":"ContainerDied","Data":"76f686a64b7bcba52e9cf572d78b41631a5873f435dbdf098126fe32ac5ccc3f"} Jan 30 05:10:18 crc 
kubenswrapper[4931]: I0130 05:10:18.111307 4931 generic.go:334] "Generic (PLEG): container finished" podID="792f11bc-0559-4037-8c28-1628f1cc0ec7" containerID="7e807662008e3ca4617fe0888edfc68608e136fe58951ccd21cc89ffd24e6aaa" exitCode=0 Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.111364 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"792f11bc-0559-4037-8c28-1628f1cc0ec7","Type":"ContainerDied","Data":"7e807662008e3ca4617fe0888edfc68608e136fe58951ccd21cc89ffd24e6aaa"} Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.124895 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerStarted","Data":"38ccccf46a0ae1cf8c452695abc11e45f7b2dbc3854f74a91d1dd3b017d08519"} Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.127087 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-catalog-content\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.127230 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-utilities\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.127283 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnh5\" (UniqueName: \"kubernetes.io/projected/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-kube-api-access-2dnh5\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.175791 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.228162 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-catalog-content\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.228252 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-utilities\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.228288 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnh5\" (UniqueName: \"kubernetes.io/projected/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-kube-api-access-2dnh5\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 
05:10:18.232571 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-catalog-content\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.232621 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-utilities\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.255738 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dnh5\" (UniqueName: \"kubernetes.io/projected/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-kube-api-access-2dnh5\") pod \"redhat-operators-56dq5\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.282969 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.543507 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.546989 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-268mt" Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.559452 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z64mf"] Jan 30 05:10:18 crc kubenswrapper[4931]: I0130 05:10:18.773647 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56dq5"] Jan 30 05:10:18 crc kubenswrapper[4931]: W0130 05:10:18.788712 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9dd69f_1c2e_4b14_83f8_dff33fe2118d.slice/crio-54c769092679e1f74ddf76a97ce7a50ba4a68138b396c7ae26403fddbe513fc1 WatchSource:0}: Error finding container 54c769092679e1f74ddf76a97ce7a50ba4a68138b396c7ae26403fddbe513fc1: Status 404 returned error can't find the container with id 54c769092679e1f74ddf76a97ce7a50ba4a68138b396c7ae26403fddbe513fc1 Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.159782 4931 generic.go:334] "Generic (PLEG): container finished" podID="bb356dde-8435-471d-a260-8966eeb15eb3" containerID="27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f" exitCode=0 Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.160500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerDied","Data":"27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f"} Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.160541 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerStarted","Data":"a6a9276eab6557cd642ac08c2583f1c3b08c9bbb62478c22c66b2f818922633b"} Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.170204 4931 generic.go:334] "Generic (PLEG): 
container finished" podID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerID="95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836" exitCode=0 Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.170310 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerDied","Data":"95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836"} Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.200864 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerStarted","Data":"54c769092679e1f74ddf76a97ce7a50ba4a68138b396c7ae26403fddbe513fc1"} Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.570653 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.600412 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.659584 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792f11bc-0559-4037-8c28-1628f1cc0ec7-kubelet-dir\") pod \"792f11bc-0559-4037-8c28-1628f1cc0ec7\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.659723 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/792f11bc-0559-4037-8c28-1628f1cc0ec7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "792f11bc-0559-4037-8c28-1628f1cc0ec7" (UID: "792f11bc-0559-4037-8c28-1628f1cc0ec7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.659902 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792f11bc-0559-4037-8c28-1628f1cc0ec7-kube-api-access\") pod \"792f11bc-0559-4037-8c28-1628f1cc0ec7\" (UID: \"792f11bc-0559-4037-8c28-1628f1cc0ec7\") " Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.660331 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/792f11bc-0559-4037-8c28-1628f1cc0ec7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.666509 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792f11bc-0559-4037-8c28-1628f1cc0ec7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "792f11bc-0559-4037-8c28-1628f1cc0ec7" (UID: "792f11bc-0559-4037-8c28-1628f1cc0ec7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.761079 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-855th\" (UniqueName: \"kubernetes.io/projected/1a8f99a6-f163-4720-8eb4-bc8607753d79-kube-api-access-855th\") pod \"1a8f99a6-f163-4720-8eb4-bc8607753d79\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.761157 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a8f99a6-f163-4720-8eb4-bc8607753d79-secret-volume\") pod \"1a8f99a6-f163-4720-8eb4-bc8607753d79\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.761213 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a8f99a6-f163-4720-8eb4-bc8607753d79-config-volume\") pod \"1a8f99a6-f163-4720-8eb4-bc8607753d79\" (UID: \"1a8f99a6-f163-4720-8eb4-bc8607753d79\") " Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.761748 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/792f11bc-0559-4037-8c28-1628f1cc0ec7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.762601 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a8f99a6-f163-4720-8eb4-bc8607753d79-config-volume" (OuterVolumeSpecName: "config-volume") pod "1a8f99a6-f163-4720-8eb4-bc8607753d79" (UID: "1a8f99a6-f163-4720-8eb4-bc8607753d79"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.765102 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a8f99a6-f163-4720-8eb4-bc8607753d79-kube-api-access-855th" (OuterVolumeSpecName: "kube-api-access-855th") pod "1a8f99a6-f163-4720-8eb4-bc8607753d79" (UID: "1a8f99a6-f163-4720-8eb4-bc8607753d79"). InnerVolumeSpecName "kube-api-access-855th". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.765346 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a8f99a6-f163-4720-8eb4-bc8607753d79-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1a8f99a6-f163-4720-8eb4-bc8607753d79" (UID: "1a8f99a6-f163-4720-8eb4-bc8607753d79"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.863009 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1a8f99a6-f163-4720-8eb4-bc8607753d79-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.863045 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a8f99a6-f163-4720-8eb4-bc8607753d79-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:19 crc kubenswrapper[4931]: I0130 05:10:19.863066 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-855th\" (UniqueName: \"kubernetes.io/projected/1a8f99a6-f163-4720-8eb4-bc8607753d79-kube-api-access-855th\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.214759 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"792f11bc-0559-4037-8c28-1628f1cc0ec7","Type":"ContainerDied","Data":"6d7890ef195ce83aca04b6656932ece915976829dbe4227034fe4866b13227f6"} Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.214826 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d7890ef195ce83aca04b6656932ece915976829dbe4227034fe4866b13227f6" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.214906 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.237550 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerID="c23bf00de71e269e3c5c3d32d2b7e2842aa494dd5f905c0feee3ad4799d5aa22" exitCode=0 Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.237750 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerDied","Data":"c23bf00de71e269e3c5c3d32d2b7e2842aa494dd5f905c0feee3ad4799d5aa22"} Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.244770 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" event={"ID":"1a8f99a6-f163-4720-8eb4-bc8607753d79","Type":"ContainerDied","Data":"925d7ec4214d424008eeb73fc8925f29c574b109b85902152a8bda78b7583feb"} Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.244820 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="925d7ec4214d424008eeb73fc8925f29c574b109b85902152a8bda78b7583feb" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.244825 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.568791 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 05:10:20 crc kubenswrapper[4931]: E0130 05:10:20.569060 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a8f99a6-f163-4720-8eb4-bc8607753d79" containerName="collect-profiles" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.569072 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a8f99a6-f163-4720-8eb4-bc8607753d79" containerName="collect-profiles" Jan 30 05:10:20 crc kubenswrapper[4931]: E0130 05:10:20.569085 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792f11bc-0559-4037-8c28-1628f1cc0ec7" containerName="pruner" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.569093 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="792f11bc-0559-4037-8c28-1628f1cc0ec7" containerName="pruner" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.569207 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="792f11bc-0559-4037-8c28-1628f1cc0ec7" containerName="pruner" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.569217 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a8f99a6-f163-4720-8eb4-bc8607753d79" containerName="collect-profiles" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.569610 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.573481 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.574650 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.576897 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.673498 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd46484-5f82-4786-a8a1-25484b70f820-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.673637 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd46484-5f82-4786-a8a1-25484b70f820-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.775607 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd46484-5f82-4786-a8a1-25484b70f820-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.775727 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/dcd46484-5f82-4786-a8a1-25484b70f820-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.775769 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd46484-5f82-4786-a8a1-25484b70f820-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.795641 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd46484-5f82-4786-a8a1-25484b70f820-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:20 crc kubenswrapper[4931]: I0130 05:10:20.889824 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 05:10:21 crc kubenswrapper[4931]: I0130 05:10:21.511158 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 05:10:22 crc kubenswrapper[4931]: I0130 05:10:22.296073 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcd46484-5f82-4786-a8a1-25484b70f820","Type":"ContainerStarted","Data":"7f8a5441fbe7c584c757919eb28cbefcf2c8e76103b55e4ac23031a1715dee4e"} Jan 30 05:10:23 crc kubenswrapper[4931]: I0130 05:10:23.281916 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4phnt" Jan 30 05:10:23 crc kubenswrapper[4931]: I0130 05:10:23.312309 4931 generic.go:334] "Generic (PLEG): container finished" podID="dcd46484-5f82-4786-a8a1-25484b70f820" containerID="1f13b2e50cc2004031bacb2a31e30974e49e8bb1676e761b872966edb1b3f54f" exitCode=0 Jan 30 05:10:23 crc kubenswrapper[4931]: I0130 05:10:23.312347 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcd46484-5f82-4786-a8a1-25484b70f820","Type":"ContainerDied","Data":"1f13b2e50cc2004031bacb2a31e30974e49e8bb1676e761b872966edb1b3f54f"} Jan 30 05:10:26 crc kubenswrapper[4931]: I0130 05:10:26.480048 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:26 crc kubenswrapper[4931]: I0130 05:10:26.484061 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ff4lr" Jan 30 05:10:27 crc kubenswrapper[4931]: I0130 05:10:27.363771 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:10:27 crc kubenswrapper[4931]: I0130 05:10:27.364369 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 
Jan 30 05:10:28 crc kubenswrapper[4931]: I0130 05:10:28.030731 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tbgzs"
Jan 30 05:10:30 crc kubenswrapper[4931]: I0130 05:10:30.088131 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:10:30 crc kubenswrapper[4931]: I0130 05:10:30.096247 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1421762e-4873-46cb-8c43-b8faa0cbca62-metrics-certs\") pod \"network-metrics-daemon-gt48b\" (UID: \"1421762e-4873-46cb-8c43-b8faa0cbca62\") " pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:10:30 crc kubenswrapper[4931]: I0130 05:10:30.268316 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gt48b"
Jan 30 05:10:34 crc kubenswrapper[4931]: I0130 05:10:34.997873 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7"
Jan 30 05:10:36 crc kubenswrapper[4931]: I0130 05:10:36.959248 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.124367 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd46484-5f82-4786-a8a1-25484b70f820-kube-api-access\") pod \"dcd46484-5f82-4786-a8a1-25484b70f820\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") "
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.124496 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd46484-5f82-4786-a8a1-25484b70f820-kubelet-dir\") pod \"dcd46484-5f82-4786-a8a1-25484b70f820\" (UID: \"dcd46484-5f82-4786-a8a1-25484b70f820\") "
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.124708 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcd46484-5f82-4786-a8a1-25484b70f820-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcd46484-5f82-4786-a8a1-25484b70f820" (UID: "dcd46484-5f82-4786-a8a1-25484b70f820"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.124874 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcd46484-5f82-4786-a8a1-25484b70f820-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.130440 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd46484-5f82-4786-a8a1-25484b70f820-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcd46484-5f82-4786-a8a1-25484b70f820" (UID: "dcd46484-5f82-4786-a8a1-25484b70f820"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.226317 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcd46484-5f82-4786-a8a1-25484b70f820-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.419679 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dcd46484-5f82-4786-a8a1-25484b70f820","Type":"ContainerDied","Data":"7f8a5441fbe7c584c757919eb28cbefcf2c8e76103b55e4ac23031a1715dee4e"}
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.419728 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f8a5441fbe7c584c757919eb28cbefcf2c8e76103b55e4ac23031a1715dee4e"
Jan 30 05:10:37 crc kubenswrapper[4931]: I0130 05:10:37.419746 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.066822 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.067938 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gdhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-z64mf_openshift-marketplace(bb356dde-8435-471d-a260-8966eeb15eb3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.072927 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-z64mf" podUID="bb356dde-8435-471d-a260-8966eeb15eb3"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.173057 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.173213 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn84w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pfl6d_openshift-marketplace(72ab8593-3b5e-421a-ac80-b85376b21ffe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.174400 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pfl6d" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.185598 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.185719 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8xcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7jp5s_openshift-marketplace(9ac0e0dc-4375-4faf-a262-2cf4e9772a29): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.187612 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7jp5s" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.225651 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.226251 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dnh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-56dq5_openshift-marketplace(7e9dd69f-1c2e-4b14-83f8-dff33fe2118d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.227713 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.227832 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg6xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k5fcn_openshift-marketplace(9163b44e-4aa5-422c-a2fd-55747c8d506e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.228948 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k5fcn" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e"
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.232132 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-56dq5" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d"
Jan 30 05:10:45 crc kubenswrapper[4931]: I0130 05:10:45.323855 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gt48b"]
Jan 30 05:10:45 crc kubenswrapper[4931]: W0130 05:10:45.331700 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1421762e_4873_46cb_8c43_b8faa0cbca62.slice/crio-af78854884472a71530d82d034faa6f6359f0652a628d54776f28d6a26b04995 WatchSource:0}: Error finding container af78854884472a71530d82d034faa6f6359f0652a628d54776f28d6a26b04995: Status 404 returned error can't find the container with id af78854884472a71530d82d034faa6f6359f0652a628d54776f28d6a26b04995
Jan 30 05:10:45 crc kubenswrapper[4931]: I0130 05:10:45.478824 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerStarted","Data":"924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9"}
Jan 30 05:10:45 crc kubenswrapper[4931]: I0130 05:10:45.484160 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gt48b" event={"ID":"1421762e-4873-46cb-8c43-b8faa0cbca62","Type":"ContainerStarted","Data":"af78854884472a71530d82d034faa6f6359f0652a628d54776f28d6a26b04995"}
Jan 30 05:10:45 crc kubenswrapper[4931]: I0130 05:10:45.486702 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerStarted","Data":"8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111"}
Jan 30 05:10:45 crc kubenswrapper[4931]: I0130 05:10:45.488240 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerStarted","Data":"a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8"}
Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.493400 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-56dq5" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d"
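
Once a pull fails, subsequent sync attempts are rejected with ImagePullBackOff (visible immediately above and below) until the back-off window expires. The back-off is exponential; a sketch of the arithmetic, assuming the commonly cited kubelet defaults of a 10 s initial delay doubling to a 300 s cap (those numbers are kubelet configuration, not something this log states):

```go
package main

import (
	"fmt"
	"time"
)

// Assumed defaults for the kubelet's image pull back-off; the log only shows
// that back-off is happening, not these exact values.
const (
	initialBackoff = 10 * time.Second
	maxBackoff     = 300 * time.Second
)

// backoffAfter returns the wait imposed after the n-th consecutive failure.
func backoffAfter(failures int) time.Duration {
	d := initialBackoff
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxBackoff {
			return maxBackoff
		}
	}
	return d
}

func main() {
	for f := 1; f <= 6; f++ {
		fmt.Printf("failure %d -> wait %v\n", f, backoffAfter(f)) // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
	}
}
```
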
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-z64mf" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.494140 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k5fcn" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.494843 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7jp5s" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" Jan 30 05:10:45 crc kubenswrapper[4931]: E0130 05:10:45.494961 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pfl6d" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.497055 4931 generic.go:334] "Generic (PLEG): container finished" podID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerID="924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9" exitCode=0 Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.497151 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerDied","Data":"924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9"} Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.503242 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gt48b" event={"ID":"1421762e-4873-46cb-8c43-b8faa0cbca62","Type":"ContainerStarted","Data":"a598cff63bbc39cb13ce46e815b31cc5173cc8625f4cbbeedd2e4a6af3e83182"} Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.503300 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gt48b" event={"ID":"1421762e-4873-46cb-8c43-b8faa0cbca62","Type":"ContainerStarted","Data":"254617a263e25220ad4eb40ffafa564e067a606620a8cbddb9e3a5d832e1ee94"} Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.508194 4931 generic.go:334] "Generic (PLEG): container finished" podID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerID="8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111" exitCode=0 Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.508325 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerDied","Data":"8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111"} Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.512656 4931 generic.go:334] "Generic (PLEG): container finished" podID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerID="a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8" exitCode=0 Jan 30 05:10:46 crc kubenswrapper[4931]: 
I0130 05:10:46.512695 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerDied","Data":"a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8"} Jan 30 05:10:46 crc kubenswrapper[4931]: I0130 05:10:46.553697 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gt48b" podStartSLOduration=159.553659824 podStartE2EDuration="2m39.553659824s" podCreationTimestamp="2026-01-30 05:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:46.547681507 +0000 UTC m=+181.917591794" watchObservedRunningTime="2026-01-30 05:10:46.553659824 +0000 UTC m=+181.923570091" Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.521108 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerStarted","Data":"df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca"} Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.523914 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerStarted","Data":"8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534"} Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.528616 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerStarted","Data":"61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573"} Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.545488 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-frnwj" podStartSLOduration=2.457923278 podStartE2EDuration="33.545466139s" podCreationTimestamp="2026-01-30 05:10:14 +0000 UTC" firstStartedPulling="2026-01-30 05:10:16.027396054 +0000 UTC m=+151.397306311" lastFinishedPulling="2026-01-30 05:10:47.114938915 +0000 UTC m=+182.484849172" observedRunningTime="2026-01-30 05:10:47.541738031 +0000 UTC m=+182.911648338" watchObservedRunningTime="2026-01-30 05:10:47.545466139 +0000 UTC m=+182.915376406" Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.570154 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4j7wh" podStartSLOduration=2.605764374 podStartE2EDuration="33.570112717s" podCreationTimestamp="2026-01-30 05:10:14 +0000 UTC" firstStartedPulling="2026-01-30 05:10:16.031850861 +0000 UTC m=+151.401761118" lastFinishedPulling="2026-01-30 05:10:46.996199214 +0000 UTC m=+182.366109461" observedRunningTime="2026-01-30 05:10:47.569160192 +0000 UTC m=+182.939070459" watchObservedRunningTime="2026-01-30 05:10:47.570112717 +0000 UTC m=+182.940022984" Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.607315 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mltbk" podStartSLOduration=3.837449271 podStartE2EDuration="31.607284933s" podCreationTimestamp="2026-01-30 05:10:16 +0000 UTC" firstStartedPulling="2026-01-30 05:10:19.185225881 +0000 UTC m=+154.555136138" lastFinishedPulling="2026-01-30 05:10:46.955061543 +0000 UTC 
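
The startup-latency entries encode a useful identity: podStartE2EDuration is the observed-running timestamp (here the watchObservedRunningTime value) minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. The certified-operators-frnwj numbers check out exactly; a quick verification using the timestamps copied from the entry above:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps from the certified-operators-frnwj startup entry.
	created := ts("2026-01-30 05:10:14 +0000 UTC")
	firstPull := ts("2026-01-30 05:10:16.027396054 +0000 UTC")
	lastPull := ts("2026-01-30 05:10:47.114938915 +0000 UTC")
	running := ts("2026-01-30 05:10:47.545466139 +0000 UTC")

	e2e := running.Sub(created)          // 33.545466139s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.457923278s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
```
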
Jan 30 05:10:47 crc kubenswrapper[4931]: I0130 05:10:47.607315 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mltbk" podStartSLOduration=3.837449271 podStartE2EDuration="31.607284933s" podCreationTimestamp="2026-01-30 05:10:16 +0000 UTC" firstStartedPulling="2026-01-30 05:10:19.185225881 +0000 UTC m=+154.555136138" lastFinishedPulling="2026-01-30 05:10:46.955061543 +0000 UTC m=+182.324971800" observedRunningTime="2026-01-30 05:10:47.600564997 +0000 UTC m=+182.970475264" watchObservedRunningTime="2026-01-30 05:10:47.607284933 +0000 UTC m=+182.977195190"
Jan 30 05:10:48 crc kubenswrapper[4931]: I0130 05:10:48.504533 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mg25x"
Jan 30 05:10:53 crc kubenswrapper[4931]: I0130 05:10:53.461470 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 05:10:54 crc kubenswrapper[4931]: I0130 05:10:54.945643 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-frnwj"
Jan 30 05:10:54 crc kubenswrapper[4931]: I0130 05:10:54.945718 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-frnwj"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.070780 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4j7wh"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.089481 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4j7wh"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.146535 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-frnwj"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.146722 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4j7wh"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.154452 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 05:10:55 crc kubenswrapper[4931]: E0130 05:10:55.154739 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd46484-5f82-4786-a8a1-25484b70f820" containerName="pruner"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.154755 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd46484-5f82-4786-a8a1-25484b70f820" containerName="pruner"
Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.154856 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd46484-5f82-4786-a8a1-25484b70f820" containerName="pruner"
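
The probe transitions above show the startup probe gating the readiness probe: while a container's startup probe is still failing, readiness is reported as status="" (not yet evaluated) rather than as a failure, and only after status="started" does a real readiness result appear. A compact sketch of that gating, as a plain state check (the names are illustrative, not kubelet API):

```go
package main

import "fmt"

type probeResult string

const (
	unknown   probeResult = ""          // matches status="" in the log
	unhealthy probeResult = "unhealthy"
	ready     probeResult = "ready"
)

// effectiveReadiness reports what would be surfaced for a container:
// readiness is not evaluated until the startup probe has succeeded once.
func effectiveReadiness(startupDone, readinessPassing bool) probeResult {
	if !startupDone {
		return unknown
	}
	if readinessPassing {
		return ready
	}
	return unhealthy
}

func main() {
	fmt.Printf("before startup: %q\n", effectiveReadiness(false, true)) // ""
	fmt.Printf("after startup:  %q\n", effectiveReadiness(true, true))  // "ready"
}
```
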
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.157253 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.158054 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.168408 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.260891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.264809 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.366229 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.366326 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.366808 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.385019 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.499524 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.629489 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.631631 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.724381 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 05:10:55 crc kubenswrapper[4931]: I0130 05:10:55.950730 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ww4ml"] Jan 30 05:10:56 crc kubenswrapper[4931]: I0130 05:10:56.585505 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc","Type":"ContainerStarted","Data":"ad1c13e14a492a607796de98bd778ea69e3d65c16697f4c694aa005acd953049"} Jan 30 05:10:56 crc kubenswrapper[4931]: I0130 05:10:56.585841 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc","Type":"ContainerStarted","Data":"3e88c70970ead77b66a99a43f05bacb873bb709d4d21867e011b9a24c1f4cf06"} Jan 30 05:10:56 crc kubenswrapper[4931]: I0130 05:10:56.603710 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.603689546 podStartE2EDuration="1.603689546s" podCreationTimestamp="2026-01-30 05:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:10:56.602137385 +0000 UTC m=+191.972047642" watchObservedRunningTime="2026-01-30 05:10:56.603689546 +0000 UTC m=+191.973599803" Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.283774 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.283995 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.338243 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:57 crc kubenswrapper[4931]: E0130 05:10:57.359627 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod4e2d9da0_86e1_4a44_a714_6ca3d2d32edc.slice/crio-conmon-ad1c13e14a492a607796de98bd778ea69e3d65c16697f4c694aa005acd953049.scope\": RecentStats: unable to find data in memory cache]" Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.363055 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.363122 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.393745 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j7wh"] Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.593316 4931 generic.go:334] "Generic (PLEG): container finished" podID="4e2d9da0-86e1-4a44-a714-6ca3d2d32edc" containerID="ad1c13e14a492a607796de98bd778ea69e3d65c16697f4c694aa005acd953049" exitCode=0 Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.593442 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc","Type":"ContainerDied","Data":"ad1c13e14a492a607796de98bd778ea69e3d65c16697f4c694aa005acd953049"} Jan 30 05:10:57 crc kubenswrapper[4931]: I0130 05:10:57.657613 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:10:58 crc kubenswrapper[4931]: I0130 05:10:58.599909 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4j7wh" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="registry-server" containerID="cri-o://8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534" gracePeriod=2 Jan 30 05:10:58 crc kubenswrapper[4931]: I0130 05:10:58.840546 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:58.992390 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.024497 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kube-api-access\") pod \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.024598 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kubelet-dir\") pod \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\" (UID: \"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc\") " Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.024734 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4e2d9da0-86e1-4a44-a714-6ca3d2d32edc" (UID: "4e2d9da0-86e1-4a44-a714-6ca3d2d32edc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.025316 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.032305 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4e2d9da0-86e1-4a44-a714-6ca3d2d32edc" (UID: "4e2d9da0-86e1-4a44-a714-6ca3d2d32edc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.126602 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-catalog-content\") pod \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.126835 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5brk\" (UniqueName: \"kubernetes.io/projected/39e99a4f-8956-424c-a4c6-7a67f9983cd0-kube-api-access-v5brk\") pod \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.126887 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-utilities\") pod \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\" (UID: \"39e99a4f-8956-424c-a4c6-7a67f9983cd0\") " Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.127137 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e2d9da0-86e1-4a44-a714-6ca3d2d32edc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.127736 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-utilities" (OuterVolumeSpecName: "utilities") pod "39e99a4f-8956-424c-a4c6-7a67f9983cd0" (UID: "39e99a4f-8956-424c-a4c6-7a67f9983cd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.132895 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e99a4f-8956-424c-a4c6-7a67f9983cd0-kube-api-access-v5brk" (OuterVolumeSpecName: "kube-api-access-v5brk") pod "39e99a4f-8956-424c-a4c6-7a67f9983cd0" (UID: "39e99a4f-8956-424c-a4c6-7a67f9983cd0"). InnerVolumeSpecName "kube-api-access-v5brk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.190291 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39e99a4f-8956-424c-a4c6-7a67f9983cd0" (UID: "39e99a4f-8956-424c-a4c6-7a67f9983cd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.228162 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5brk\" (UniqueName: \"kubernetes.io/projected/39e99a4f-8956-424c-a4c6-7a67f9983cd0-kube-api-access-v5brk\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.228201 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.228213 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e99a4f-8956-424c-a4c6-7a67f9983cd0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.596732 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mltbk"] Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.606859 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4e2d9da0-86e1-4a44-a714-6ca3d2d32edc","Type":"ContainerDied","Data":"3e88c70970ead77b66a99a43f05bacb873bb709d4d21867e011b9a24c1f4cf06"} Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.606913 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e88c70970ead77b66a99a43f05bacb873bb709d4d21867e011b9a24c1f4cf06" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.607017 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.609556 4931 generic.go:334] "Generic (PLEG): container finished" podID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerID="8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534" exitCode=0 Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.609601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerDied","Data":"8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534"} Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.609620 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j7wh" event={"ID":"39e99a4f-8956-424c-a4c6-7a67f9983cd0","Type":"ContainerDied","Data":"450b4e5ad73b8f494223ffa7ed558e29541dc036fd17fb80d85509be86652339"} Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.609638 4931 scope.go:117] "RemoveContainer" containerID="8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.609735 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j7wh" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.612891 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerStarted","Data":"128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e"} Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.615848 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerStarted","Data":"6a25d2a944e2534b4ce44d83ba3e3a50d7e29e3ca3c70fb2a75b736f90d90dec"} Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.621590 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerStarted","Data":"70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6"} Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.629837 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j7wh"] Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.633506 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4j7wh"] Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.637872 4931 scope.go:117] "RemoveContainer" containerID="924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.692111 4931 scope.go:117] "RemoveContainer" containerID="0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.807030 4931 scope.go:117] "RemoveContainer" containerID="8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534" Jan 30 05:10:59 crc kubenswrapper[4931]: E0130 05:10:59.808009 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534\": container with ID starting with 8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534 not found: ID does not exist" containerID="8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.808070 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534"} err="failed to get container status \"8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534\": rpc error: code = NotFound desc = could not find container \"8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534\": container with ID starting with 8983535f739cee230963069e641cbe7d953fb87dff0ee7647e2b412a361bd534 not found: ID does not exist" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.808155 4931 scope.go:117] "RemoveContainer" containerID="924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9" Jan 30 05:10:59 crc kubenswrapper[4931]: E0130 05:10:59.808543 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9\": container with ID starting with 924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9 not found: ID does not exist" 
containerID="924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.808608 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9"} err="failed to get container status \"924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9\": rpc error: code = NotFound desc = could not find container \"924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9\": container with ID starting with 924d3fe053808cb7c25239d42d036e4a8bda434c0ff0f8a8ede24d716ee714c9 not found: ID does not exist" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.808644 4931 scope.go:117] "RemoveContainer" containerID="0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893" Jan 30 05:10:59 crc kubenswrapper[4931]: E0130 05:10:59.809044 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893\": container with ID starting with 0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893 not found: ID does not exist" containerID="0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893" Jan 30 05:10:59 crc kubenswrapper[4931]: I0130 05:10:59.809078 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893"} err="failed to get container status \"0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893\": rpc error: code = NotFound desc = could not find container \"0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893\": container with ID starting with 0ab90532d3015b014ea0416fbd3c969102b9cb3b660fd71f10ccc1460320a893 not found: ID does not exist" Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.630293 4931 generic.go:334] "Generic (PLEG): container finished" podID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerID="128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e" exitCode=0 Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.630328 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerDied","Data":"128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e"} Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.633750 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerID="6a25d2a944e2534b4ce44d83ba3e3a50d7e29e3ca3c70fb2a75b736f90d90dec" exitCode=0 Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.633804 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerDied","Data":"6a25d2a944e2534b4ce44d83ba3e3a50d7e29e3ca3c70fb2a75b736f90d90dec"} Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.637137 4931 generic.go:334] "Generic (PLEG): container finished" podID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerID="70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6" exitCode=0 Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.637220 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" 
event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerDied","Data":"70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6"} Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.639534 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerStarted","Data":"6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06"} Jan 30 05:11:00 crc kubenswrapper[4931]: I0130 05:11:00.645485 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mltbk" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="registry-server" containerID="cri-o://61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573" gracePeriod=2 Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.055694 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.092619 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-utilities\") pod \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.092683 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjhnn\" (UniqueName: \"kubernetes.io/projected/025f8209-dd2a-482c-8bb2-e0ad2a98a563-kube-api-access-xjhnn\") pod \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.092716 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-catalog-content\") pod \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\" (UID: \"025f8209-dd2a-482c-8bb2-e0ad2a98a563\") " Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.094625 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-utilities" (OuterVolumeSpecName: "utilities") pod "025f8209-dd2a-482c-8bb2-e0ad2a98a563" (UID: "025f8209-dd2a-482c-8bb2-e0ad2a98a563"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.101540 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025f8209-dd2a-482c-8bb2-e0ad2a98a563-kube-api-access-xjhnn" (OuterVolumeSpecName: "kube-api-access-xjhnn") pod "025f8209-dd2a-482c-8bb2-e0ad2a98a563" (UID: "025f8209-dd2a-482c-8bb2-e0ad2a98a563"). InnerVolumeSpecName "kube-api-access-xjhnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.129645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "025f8209-dd2a-482c-8bb2-e0ad2a98a563" (UID: "025f8209-dd2a-482c-8bb2-e0ad2a98a563"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.193510 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjhnn\" (UniqueName: \"kubernetes.io/projected/025f8209-dd2a-482c-8bb2-e0ad2a98a563-kube-api-access-xjhnn\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.193552 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.193565 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/025f8209-dd2a-482c-8bb2-e0ad2a98a563-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.433403 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" path="/var/lib/kubelet/pods/39e99a4f-8956-424c-a4c6-7a67f9983cd0/volumes" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.655164 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerStarted","Data":"22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.659983 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerStarted","Data":"6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.674336 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerStarted","Data":"98f1a2d73272f9f16ee133a746115272f4fdf7cbb2973402df8b866824230ca2"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.678877 4931 generic.go:334] "Generic (PLEG): container finished" podID="bb356dde-8435-471d-a260-8966eeb15eb3" containerID="6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06" exitCode=0 Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.678993 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerDied","Data":"6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.690277 4931 generic.go:334] "Generic (PLEG): container finished" podID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerID="61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573" exitCode=0 Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.690560 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mltbk" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.691625 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerDied","Data":"61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.691682 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mltbk" event={"ID":"025f8209-dd2a-482c-8bb2-e0ad2a98a563","Type":"ContainerDied","Data":"38ccccf46a0ae1cf8c452695abc11e45f7b2dbc3854f74a91d1dd3b017d08519"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.691719 4931 scope.go:117] "RemoveContainer" containerID="61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.693943 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pfl6d" podStartSLOduration=3.6294463500000003 podStartE2EDuration="47.693914661s" podCreationTimestamp="2026-01-30 05:10:14 +0000 UTC" firstStartedPulling="2026-01-30 05:10:17.048445826 +0000 UTC m=+152.418356083" lastFinishedPulling="2026-01-30 05:11:01.112914137 +0000 UTC m=+196.482824394" observedRunningTime="2026-01-30 05:11:01.685629354 +0000 UTC m=+197.055539651" watchObservedRunningTime="2026-01-30 05:11:01.693914661 +0000 UTC m=+197.063824918" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.696376 4931 generic.go:334] "Generic (PLEG): container finished" podID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerID="709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd" exitCode=0 Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.696454 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jp5s" event={"ID":"9ac0e0dc-4375-4faf-a262-2cf4e9772a29","Type":"ContainerDied","Data":"709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd"} Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.729512 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5fcn" podStartSLOduration=2.644228646 podStartE2EDuration="47.729478542s" podCreationTimestamp="2026-01-30 05:10:14 +0000 UTC" firstStartedPulling="2026-01-30 05:10:16.037483989 +0000 UTC m=+151.407394246" lastFinishedPulling="2026-01-30 05:11:01.122733885 +0000 UTC m=+196.492644142" observedRunningTime="2026-01-30 05:11:01.712928308 +0000 UTC m=+197.082838575" watchObservedRunningTime="2026-01-30 05:11:01.729478542 +0000 UTC m=+197.099388799" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.731310 4931 scope.go:117] "RemoveContainer" containerID="8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.757398 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56dq5" podStartSLOduration=3.925124056 podStartE2EDuration="44.757366074s" podCreationTimestamp="2026-01-30 05:10:17 +0000 UTC" firstStartedPulling="2026-01-30 05:10:20.240440682 +0000 UTC m=+155.610350939" lastFinishedPulling="2026-01-30 05:11:01.0726827 +0000 UTC m=+196.442592957" observedRunningTime="2026-01-30 05:11:01.751497619 +0000 UTC m=+197.121407906" watchObservedRunningTime="2026-01-30 05:11:01.757366074 +0000 UTC m=+197.127276331" Jan 
30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.766664 4931 scope.go:117] "RemoveContainer" containerID="95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.790297 4931 scope.go:117] "RemoveContainer" containerID="61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573" Jan 30 05:11:01 crc kubenswrapper[4931]: E0130 05:11:01.790729 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573\": container with ID starting with 61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573 not found: ID does not exist" containerID="61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.790777 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573"} err="failed to get container status \"61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573\": rpc error: code = NotFound desc = could not find container \"61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573\": container with ID starting with 61e91ec5adf3b6d2e6c6bf351571b66e56de20029d70508816d60e2f718fa573 not found: ID does not exist" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.790808 4931 scope.go:117] "RemoveContainer" containerID="8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111" Jan 30 05:11:01 crc kubenswrapper[4931]: E0130 05:11:01.793685 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111\": container with ID starting with 8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111 not found: ID does not exist" containerID="8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.793717 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111"} err="failed to get container status \"8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111\": rpc error: code = NotFound desc = could not find container \"8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111\": container with ID starting with 8077360b2f317c6b05d19deec3953c16d0d53ccc821e4ebd6c5f208a32b8e111 not found: ID does not exist" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.793733 4931 scope.go:117] "RemoveContainer" containerID="95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836" Jan 30 05:11:01 crc kubenswrapper[4931]: E0130 05:11:01.794046 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836\": container with ID starting with 95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836 not found: ID does not exist" containerID="95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.794074 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836"} err="failed to get container 
status \"95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836\": rpc error: code = NotFound desc = could not find container \"95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836\": container with ID starting with 95a2f8458be6179195776e6c58dc301461d3257c1b29b85caedabd8aea064836 not found: ID does not exist" Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.801077 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mltbk"] Jan 30 05:11:01 crc kubenswrapper[4931]: I0130 05:11:01.804647 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mltbk"] Jan 30 05:11:02 crc kubenswrapper[4931]: I0130 05:11:02.703359 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jp5s" event={"ID":"9ac0e0dc-4375-4faf-a262-2cf4e9772a29","Type":"ContainerStarted","Data":"c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf"} Jan 30 05:11:02 crc kubenswrapper[4931]: I0130 05:11:02.706474 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerStarted","Data":"be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2"} Jan 30 05:11:02 crc kubenswrapper[4931]: I0130 05:11:02.729190 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jp5s" podStartSLOduration=2.613166089 podStartE2EDuration="46.729171291s" podCreationTimestamp="2026-01-30 05:10:16 +0000 UTC" firstStartedPulling="2026-01-30 05:10:18.106706187 +0000 UTC m=+153.476616444" lastFinishedPulling="2026-01-30 05:11:02.222711389 +0000 UTC m=+197.592621646" observedRunningTime="2026-01-30 05:11:02.725828791 +0000 UTC m=+198.095739048" watchObservedRunningTime="2026-01-30 05:11:02.729171291 +0000 UTC m=+198.099081548" Jan 30 05:11:02 crc kubenswrapper[4931]: I0130 05:11:02.747269 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z64mf" podStartSLOduration=2.8182336279999998 podStartE2EDuration="45.7472449s" podCreationTimestamp="2026-01-30 05:10:17 +0000 UTC" firstStartedPulling="2026-01-30 05:10:19.162842133 +0000 UTC m=+154.532752390" lastFinishedPulling="2026-01-30 05:11:02.091853405 +0000 UTC m=+197.461763662" observedRunningTime="2026-01-30 05:11:02.7448737 +0000 UTC m=+198.114783947" watchObservedRunningTime="2026-01-30 05:11:02.7472449 +0000 UTC m=+198.117155157" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.351692 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 05:11:03 crc kubenswrapper[4931]: E0130 05:11:03.352494 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="extract-content" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.352630 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="extract-content" Jan 30 05:11:03 crc kubenswrapper[4931]: E0130 05:11:03.352694 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="registry-server" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.352761 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="registry-server" Jan 30 05:11:03 
crc kubenswrapper[4931]: E0130 05:11:03.352831 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2d9da0-86e1-4a44-a714-6ca3d2d32edc" containerName="pruner" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.352890 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2d9da0-86e1-4a44-a714-6ca3d2d32edc" containerName="pruner" Jan 30 05:11:03 crc kubenswrapper[4931]: E0130 05:11:03.352954 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="extract-utilities" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353027 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="extract-utilities" Jan 30 05:11:03 crc kubenswrapper[4931]: E0130 05:11:03.353091 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="extract-content" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353151 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="extract-content" Jan 30 05:11:03 crc kubenswrapper[4931]: E0130 05:11:03.353211 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="extract-utilities" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353286 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="extract-utilities" Jan 30 05:11:03 crc kubenswrapper[4931]: E0130 05:11:03.353351 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="registry-server" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353408 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="registry-server" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353583 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" containerName="registry-server" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353644 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2d9da0-86e1-4a44-a714-6ca3d2d32edc" containerName="pruner" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.353704 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e99a4f-8956-424c-a4c6-7a67f9983cd0" containerName="registry-server" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.354203 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.361591 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.363491 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.369845 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.430653 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025f8209-dd2a-482c-8bb2-e0ad2a98a563" path="/var/lib/kubelet/pods/025f8209-dd2a-482c-8bb2-e0ad2a98a563/volumes" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.529808 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.530149 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-var-lock\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.530337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b6db44-5b56-401a-bbce-c9e55735350f-kube-api-access\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.631851 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.631927 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-var-lock\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.631957 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b6db44-5b56-401a-bbce-c9e55735350f-kube-api-access\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.632339 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: 
I0130 05:11:03.632377 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-var-lock\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.662242 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b6db44-5b56-401a-bbce-c9e55735350f-kube-api-access\") pod \"installer-9-crc\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.678894 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:03 crc kubenswrapper[4931]: I0130 05:11:03.930903 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 05:11:04 crc kubenswrapper[4931]: I0130 05:11:04.723892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"29b6db44-5b56-401a-bbce-c9e55735350f","Type":"ContainerStarted","Data":"5262c0a3d9bad98410b15e5334a833765b519d07d9825f5243288324f99b437e"} Jan 30 05:11:04 crc kubenswrapper[4931]: I0130 05:11:04.723977 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"29b6db44-5b56-401a-bbce-c9e55735350f","Type":"ContainerStarted","Data":"b157ac78dbfd45487650da2507a964b9ef37da369d371e13ba3722a6cc6cbd9b"} Jan 30 05:11:04 crc kubenswrapper[4931]: I0130 05:11:04.746505 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.746473344 podStartE2EDuration="1.746473344s" podCreationTimestamp="2026-01-30 05:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:11:04.744388921 +0000 UTC m=+200.114299218" watchObservedRunningTime="2026-01-30 05:11:04.746473344 +0000 UTC m=+200.116383641" Jan 30 05:11:04 crc kubenswrapper[4931]: I0130 05:11:04.786443 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:11:04 crc kubenswrapper[4931]: I0130 05:11:04.786515 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:11:04 crc kubenswrapper[4931]: I0130 05:11:04.869230 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:11:05 crc kubenswrapper[4931]: I0130 05:11:05.293135 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:11:05 crc kubenswrapper[4931]: I0130 05:11:05.293649 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:11:05 crc kubenswrapper[4931]: I0130 05:11:05.354220 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:11:06 crc kubenswrapper[4931]: I0130 05:11:06.928108 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:11:06 crc kubenswrapper[4931]: I0130 05:11:06.928180 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:11:07 crc kubenswrapper[4931]: I0130 05:11:07.009284 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:11:07 crc kubenswrapper[4931]: I0130 05:11:07.795413 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:11:07 crc kubenswrapper[4931]: I0130 05:11:07.898263 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:11:07 crc kubenswrapper[4931]: I0130 05:11:07.898657 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:11:08 crc kubenswrapper[4931]: I0130 05:11:08.284096 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:11:08 crc kubenswrapper[4931]: I0130 05:11:08.284298 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:11:08 crc kubenswrapper[4931]: I0130 05:11:08.338008 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:11:08 crc kubenswrapper[4931]: I0130 05:11:08.829893 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:11:08 crc kubenswrapper[4931]: I0130 05:11:08.942042 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z64mf" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="registry-server" probeResult="failure" output=< Jan 30 05:11:08 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 05:11:08 crc kubenswrapper[4931]: > Jan 30 05:11:10 crc kubenswrapper[4931]: I0130 05:11:09.999476 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56dq5"] Jan 30 05:11:11 crc kubenswrapper[4931]: I0130 05:11:11.787480 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56dq5" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="registry-server" containerID="cri-o://98f1a2d73272f9f16ee133a746115272f4fdf7cbb2973402df8b866824230ca2" gracePeriod=2 Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.826776 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerID="98f1a2d73272f9f16ee133a746115272f4fdf7cbb2973402df8b866824230ca2" exitCode=0 Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.826852 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerDied","Data":"98f1a2d73272f9f16ee133a746115272f4fdf7cbb2973402df8b866824230ca2"} Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.885099 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.919861 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dnh5\" (UniqueName: \"kubernetes.io/projected/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-kube-api-access-2dnh5\") pod \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.920025 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-utilities\") pod \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.924706 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-utilities" (OuterVolumeSpecName: "utilities") pod "7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" (UID: "7e9dd69f-1c2e-4b14-83f8-dff33fe2118d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.926782 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-catalog-content\") pod \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\" (UID: \"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d\") " Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.928712 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:13 crc kubenswrapper[4931]: I0130 05:11:13.938747 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-kube-api-access-2dnh5" (OuterVolumeSpecName: "kube-api-access-2dnh5") pod "7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" (UID: "7e9dd69f-1c2e-4b14-83f8-dff33fe2118d"). InnerVolumeSpecName "kube-api-access-2dnh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.030371 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dnh5\" (UniqueName: \"kubernetes.io/projected/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-kube-api-access-2dnh5\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.107222 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" (UID: "7e9dd69f-1c2e-4b14-83f8-dff33fe2118d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.132590 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.841493 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56dq5" event={"ID":"7e9dd69f-1c2e-4b14-83f8-dff33fe2118d","Type":"ContainerDied","Data":"54c769092679e1f74ddf76a97ce7a50ba4a68138b396c7ae26403fddbe513fc1"} Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.841681 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56dq5" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.842236 4931 scope.go:117] "RemoveContainer" containerID="98f1a2d73272f9f16ee133a746115272f4fdf7cbb2973402df8b866824230ca2" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.860489 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.880903 4931 scope.go:117] "RemoveContainer" containerID="6a25d2a944e2534b4ce44d83ba3e3a50d7e29e3ca3c70fb2a75b736f90d90dec" Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.905355 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56dq5"] Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.915266 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56dq5"] Jan 30 05:11:14 crc kubenswrapper[4931]: I0130 05:11:14.923605 4931 scope.go:117] "RemoveContainer" containerID="c23bf00de71e269e3c5c3d32d2b7e2842aa494dd5f905c0feee3ad4799d5aa22" Jan 30 05:11:15 crc kubenswrapper[4931]: I0130 05:11:15.360540 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:11:15 crc kubenswrapper[4931]: I0130 05:11:15.435856 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" path="/var/lib/kubelet/pods/7e9dd69f-1c2e-4b14-83f8-dff33fe2118d/volumes" Jan 30 05:11:17 crc kubenswrapper[4931]: I0130 05:11:17.793748 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pfl6d"] Jan 30 05:11:17 crc kubenswrapper[4931]: I0130 05:11:17.794411 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pfl6d" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="registry-server" containerID="cri-o://22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed" gracePeriod=2 Jan 30 05:11:17 crc kubenswrapper[4931]: I0130 05:11:17.961824 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.046093 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.233797 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.299060 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-catalog-content\") pod \"72ab8593-3b5e-421a-ac80-b85376b21ffe\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.299191 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn84w\" (UniqueName: \"kubernetes.io/projected/72ab8593-3b5e-421a-ac80-b85376b21ffe-kube-api-access-xn84w\") pod \"72ab8593-3b5e-421a-ac80-b85376b21ffe\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.299230 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-utilities\") pod \"72ab8593-3b5e-421a-ac80-b85376b21ffe\" (UID: \"72ab8593-3b5e-421a-ac80-b85376b21ffe\") " Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.300585 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-utilities" (OuterVolumeSpecName: "utilities") pod "72ab8593-3b5e-421a-ac80-b85376b21ffe" (UID: "72ab8593-3b5e-421a-ac80-b85376b21ffe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.307078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ab8593-3b5e-421a-ac80-b85376b21ffe-kube-api-access-xn84w" (OuterVolumeSpecName: "kube-api-access-xn84w") pod "72ab8593-3b5e-421a-ac80-b85376b21ffe" (UID: "72ab8593-3b5e-421a-ac80-b85376b21ffe"). InnerVolumeSpecName "kube-api-access-xn84w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.350843 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72ab8593-3b5e-421a-ac80-b85376b21ffe" (UID: "72ab8593-3b5e-421a-ac80-b85376b21ffe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.401584 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn84w\" (UniqueName: \"kubernetes.io/projected/72ab8593-3b5e-421a-ac80-b85376b21ffe-kube-api-access-xn84w\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.401631 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.401644 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72ab8593-3b5e-421a-ac80-b85376b21ffe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.881017 4931 generic.go:334] "Generic (PLEG): container finished" podID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerID="22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed" exitCode=0 Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.881139 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pfl6d" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.881231 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerDied","Data":"22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed"} Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.881368 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pfl6d" event={"ID":"72ab8593-3b5e-421a-ac80-b85376b21ffe","Type":"ContainerDied","Data":"d2226ce036426e1065e325748a9372dea2501d4becb5598917d5e4c3d429e02b"} Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.881450 4931 scope.go:117] "RemoveContainer" containerID="22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.910549 4931 scope.go:117] "RemoveContainer" containerID="128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.936921 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pfl6d"] Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.944392 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pfl6d"] Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.963109 4931 scope.go:117] "RemoveContainer" containerID="bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.987487 4931 scope.go:117] "RemoveContainer" containerID="22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed" Jan 30 05:11:18 crc kubenswrapper[4931]: E0130 05:11:18.988325 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed\": container with ID starting with 22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed not found: ID does not exist" containerID="22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.988399 
4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed"} err="failed to get container status \"22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed\": rpc error: code = NotFound desc = could not find container \"22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed\": container with ID starting with 22e8d8be79fff3d0637bf7d16c30e3800da082978a27e4e29fb44d16521799ed not found: ID does not exist" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.988478 4931 scope.go:117] "RemoveContainer" containerID="128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e" Jan 30 05:11:18 crc kubenswrapper[4931]: E0130 05:11:18.990195 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e\": container with ID starting with 128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e not found: ID does not exist" containerID="128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.990243 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e"} err="failed to get container status \"128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e\": rpc error: code = NotFound desc = could not find container \"128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e\": container with ID starting with 128fe76504e70bb6392537003236cd2c16bd8f6afb75b2b9e7f6b3093e55a04e not found: ID does not exist" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.990275 4931 scope.go:117] "RemoveContainer" containerID="bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5" Jan 30 05:11:18 crc kubenswrapper[4931]: E0130 05:11:18.990842 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5\": container with ID starting with bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5 not found: ID does not exist" containerID="bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5" Jan 30 05:11:18 crc kubenswrapper[4931]: I0130 05:11:18.990881 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5"} err="failed to get container status \"bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5\": rpc error: code = NotFound desc = could not find container \"bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5\": container with ID starting with bd65a17d38c4081e883be3cbec7fcb60af6937440ed8104d6646abc57031b8d5 not found: ID does not exist" Jan 30 05:11:19 crc kubenswrapper[4931]: I0130 05:11:19.450595 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" path="/var/lib/kubelet/pods/72ab8593-3b5e-421a-ac80-b85376b21ffe/volumes" Jan 30 05:11:20 crc kubenswrapper[4931]: I0130 05:11:20.994142 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" podUID="45ceead9-96b4-4b3c-9fba-1288da84db97" containerName="oauth-openshift" 
containerID="cri-o://b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634" gracePeriod=15 Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.444931 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558654 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-cliconfig\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558737 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-idp-0-file-data\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558794 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-dir\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558855 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558916 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-router-certs\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558924 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558955 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-ocp-branding-template\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.558995 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvmhv\" (UniqueName: \"kubernetes.io/projected/45ceead9-96b4-4b3c-9fba-1288da84db97-kube-api-access-gvmhv\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559037 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-serving-cert\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559077 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-error\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-login\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559598 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-service-ca\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559659 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-provider-selection\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559719 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-trusted-ca-bundle\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559759 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-session\") pod \"45ceead9-96b4-4b3c-9fba-1288da84db97\" (UID: \"45ceead9-96b4-4b3c-9fba-1288da84db97\") " Jan 
30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.559858 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.560169 4931 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.560196 4931 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.560437 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.560760 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.561528 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.566842 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.566725 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.567018 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.567406 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.567840 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.568889 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.569449 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ceead9-96b4-4b3c-9fba-1288da84db97-kube-api-access-gvmhv" (OuterVolumeSpecName: "kube-api-access-gvmhv") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "kube-api-access-gvmhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.571069 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.579381 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "45ceead9-96b4-4b3c-9fba-1288da84db97" (UID: "45ceead9-96b4-4b3c-9fba-1288da84db97"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.661845 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.661963 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.661986 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvmhv\" (UniqueName: \"kubernetes.io/projected/45ceead9-96b4-4b3c-9fba-1288da84db97-kube-api-access-gvmhv\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662049 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662072 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662095 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662113 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662135 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662153 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662173 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662192 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.662211 4931 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/45ceead9-96b4-4b3c-9fba-1288da84db97-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.905714 4931 generic.go:334] "Generic (PLEG): container finished" podID="45ceead9-96b4-4b3c-9fba-1288da84db97" containerID="b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634" exitCode=0 Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.905863 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.906387 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" event={"ID":"45ceead9-96b4-4b3c-9fba-1288da84db97","Type":"ContainerDied","Data":"b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634"} Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.909224 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ww4ml" event={"ID":"45ceead9-96b4-4b3c-9fba-1288da84db97","Type":"ContainerDied","Data":"58f7af397c08f51e1fad13d7c31e06e26340ff1e4667e88288913594a1b1daca"} Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.909320 4931 scope.go:117] "RemoveContainer" containerID="b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.941548 4931 scope.go:117] "RemoveContainer" containerID="b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634" Jan 30 05:11:21 crc kubenswrapper[4931]: E0130 05:11:21.942286 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634\": container with ID starting with b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634 not found: ID does not exist" containerID="b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.942776 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634"} err="failed to get container status \"b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634\": rpc error: code = NotFound desc = could not find container \"b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634\": container with ID starting with b43d6b866734f28abac79825b9d740f660929182d58b93c5b171a5e5dbe71634 not found: ID does not exist" Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.961679 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ww4ml"] Jan 30 05:11:21 crc kubenswrapper[4931]: I0130 05:11:21.967835 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ww4ml"] Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.567966 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-cf4db658-jjpht"] Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568380 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="registry-server" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568403 4931 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="registry-server" Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568459 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="extract-content" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568472 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="extract-content" Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568493 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="extract-utilities" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568507 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="extract-utilities" Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568528 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="registry-server" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568540 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="registry-server" Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568561 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="extract-utilities" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568576 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="extract-utilities" Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568602 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ceead9-96b4-4b3c-9fba-1288da84db97" containerName="oauth-openshift" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568614 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ceead9-96b4-4b3c-9fba-1288da84db97" containerName="oauth-openshift" Jan 30 05:11:22 crc kubenswrapper[4931]: E0130 05:11:22.568634 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="extract-content" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568646 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="extract-content" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568820 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ab8593-3b5e-421a-ac80-b85376b21ffe" containerName="registry-server" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568845 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ceead9-96b4-4b3c-9fba-1288da84db97" containerName="oauth-openshift" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.568874 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9dd69f-1c2e-4b14-83f8-dff33fe2118d" containerName="registry-server" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.569591 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.572336 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.583199 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.583454 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.583711 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.583838 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.584200 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.584478 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.584688 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.584874 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.584962 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.585833 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.590797 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cf4db658-jjpht"] Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.591343 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.594511 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.601561 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.623104 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.680876 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " 
pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.681412 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-service-ca\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.681665 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-audit-policies\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.681844 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.682057 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-error\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.682333 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.682658 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-router-certs\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.682917 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.683194 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-session\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.683409 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-login\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.683845 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.684108 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.684395 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83887db-501f-4612-95c5-9874573e6cc3-audit-dir\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.684610 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xzp\" (UniqueName: \"kubernetes.io/projected/e83887db-501f-4612-95c5-9874573e6cc3-kube-api-access-g7xzp\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786452 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-login\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786565 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786611 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786656 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83887db-501f-4612-95c5-9874573e6cc3-audit-dir\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786699 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xzp\" (UniqueName: \"kubernetes.io/projected/e83887db-501f-4612-95c5-9874573e6cc3-kube-api-access-g7xzp\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786749 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786793 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-service-ca\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786829 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-audit-policies\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786861 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.786896 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-error\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.788132 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.788679 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-router-certs\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.788751 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.788811 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-session\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.789049 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.789977 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-audit-policies\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.790751 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83887db-501f-4612-95c5-9874573e6cc3-audit-dir\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.792331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-service-ca\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.794858 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-login\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " 
pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.796503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.798612 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.801212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-router-certs\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.803398 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.805955 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.813999 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-session\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.814826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.816910 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83887db-501f-4612-95c5-9874573e6cc3-v4-0-config-user-template-error\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " 
pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.821396 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xzp\" (UniqueName: \"kubernetes.io/projected/e83887db-501f-4612-95c5-9874573e6cc3-kube-api-access-g7xzp\") pod \"oauth-openshift-cf4db658-jjpht\" (UID: \"e83887db-501f-4612-95c5-9874573e6cc3\") " pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:22 crc kubenswrapper[4931]: I0130 05:11:22.935595 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.213136 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cf4db658-jjpht"] Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.438711 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ceead9-96b4-4b3c-9fba-1288da84db97" path="/var/lib/kubelet/pods/45ceead9-96b4-4b3c-9fba-1288da84db97/volumes" Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.934035 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" event={"ID":"e83887db-501f-4612-95c5-9874573e6cc3","Type":"ContainerStarted","Data":"63363c9fcfcece35928baa1bb7d981f575d30998580b5dc3d8e5d377e14ef296"} Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.934132 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" event={"ID":"e83887db-501f-4612-95c5-9874573e6cc3","Type":"ContainerStarted","Data":"5af29856727027285565767cd30400008f764d6832bb8e9c4c49b6d43b6e12f2"} Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.934481 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.963315 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" podStartSLOduration=28.963289058 podStartE2EDuration="28.963289058s" podCreationTimestamp="2026-01-30 05:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:11:23.962659949 +0000 UTC m=+219.332570296" watchObservedRunningTime="2026-01-30 05:11:23.963289058 +0000 UTC m=+219.333199325" Jan 30 05:11:23 crc kubenswrapper[4931]: I0130 05:11:23.979706 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-cf4db658-jjpht" Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.363823 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.364413 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 
05:11:27.365179 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.366078 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.366202 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96" gracePeriod=600 Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.969105 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96" exitCode=0 Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.969231 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96"} Jan 30 05:11:27 crc kubenswrapper[4931]: I0130 05:11:27.969530 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"595c92fb582df913939826c54f51177c31890e24d0ac56595342acf9749b06a1"} Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.143753 4931 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145177 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145279 4931 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145703 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7" gracePeriod=15 Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145727 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409" gracePeriod=15 Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145703 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f" gracePeriod=15 Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145816 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944" gracePeriod=15 Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.145762 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9" gracePeriod=15 Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.148574 4931 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.148940 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.148965 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.148988 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.148998 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.149017 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149039 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 05:11:42 crc 
kubenswrapper[4931]: E0130 05:11:42.149069 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149079 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.149102 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149112 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.149130 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149141 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.149155 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149168 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149345 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149361 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149376 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149391 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149401 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.149415 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.186695 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.351782 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.351871 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.351897 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.351927 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.351951 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.351990 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.352010 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.352033 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.456941 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457397 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457488 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457440 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457558 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457596 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457646 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457674 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457732 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.457792 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.458317 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc 
kubenswrapper[4931]: I0130 05:11:42.458445 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.459006 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.459047 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.459079 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.459112 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: I0130 05:11:42.481121 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:11:42 crc kubenswrapper[4931]: W0130 05:11:42.501656 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-260d401199a59151ab333ad8bbddcad4345a2392b184ac8ca482b074c3016a0d WatchSource:0}: Error finding container 260d401199a59151ab333ad8bbddcad4345a2392b184ac8ca482b074c3016a0d: Status 404 returned error can't find the container with id 260d401199a59151ab333ad8bbddcad4345a2392b184ac8ca482b074c3016a0d Jan 30 05:11:42 crc kubenswrapper[4931]: E0130 05:11:42.505103 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f6a28e8089b33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 05:11:42.504008499 +0000 UTC m=+237.873918766,LastTimestamp:2026-01-30 05:11:42.504008499 +0000 UTC m=+237.873918766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.105564 4931 generic.go:334] "Generic (PLEG): container finished" podID="29b6db44-5b56-401a-bbce-c9e55735350f" containerID="5262c0a3d9bad98410b15e5334a833765b519d07d9825f5243288324f99b437e" exitCode=0 Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.105685 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"29b6db44-5b56-401a-bbce-c9e55735350f","Type":"ContainerDied","Data":"5262c0a3d9bad98410b15e5334a833765b519d07d9825f5243288324f99b437e"} Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.106888 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.107492 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.107987 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.179:6443: connect: connection refused" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.109725 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.111732 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.113051 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7" exitCode=0 Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.113098 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944" exitCode=0 Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.113115 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9" exitCode=0 Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.113136 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409" exitCode=2 Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.113268 4931 scope.go:117] "RemoveContainer" containerID="13ede20d216824faa7a2228b01dd1e9d71f80c906010c828d7193df2a55e5b64" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.115863 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31"} Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.115927 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"260d401199a59151ab333ad8bbddcad4345a2392b184ac8ca482b074c3016a0d"} Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.117001 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.117525 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:43 crc kubenswrapper[4931]: I0130 05:11:43.118013 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: 
connection refused" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.130960 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 05:11:44 crc kubenswrapper[4931]: E0130 05:11:44.140045 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f6a28e8089b33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 05:11:42.504008499 +0000 UTC m=+237.873918766,LastTimestamp:2026-01-30 05:11:42.504008499 +0000 UTC m=+237.873918766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.494942 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.495776 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.495962 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.605265 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b6db44-5b56-401a-bbce-c9e55735350f-kube-api-access\") pod \"29b6db44-5b56-401a-bbce-c9e55735350f\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.605402 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-var-lock\") pod \"29b6db44-5b56-401a-bbce-c9e55735350f\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.605461 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-kubelet-dir\") pod \"29b6db44-5b56-401a-bbce-c9e55735350f\" (UID: \"29b6db44-5b56-401a-bbce-c9e55735350f\") " Jan 30 05:11:44 crc kubenswrapper[4931]: 
I0130 05:11:44.605895 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "29b6db44-5b56-401a-bbce-c9e55735350f" (UID: "29b6db44-5b56-401a-bbce-c9e55735350f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.605930 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-var-lock" (OuterVolumeSpecName: "var-lock") pod "29b6db44-5b56-401a-bbce-c9e55735350f" (UID: "29b6db44-5b56-401a-bbce-c9e55735350f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.612395 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b6db44-5b56-401a-bbce-c9e55735350f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "29b6db44-5b56-401a-bbce-c9e55735350f" (UID: "29b6db44-5b56-401a-bbce-c9e55735350f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.707414 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b6db44-5b56-401a-bbce-c9e55735350f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.707865 4931 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.707881 4931 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b6db44-5b56-401a-bbce-c9e55735350f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.750451 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.751376 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.752258 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.753202 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.754000 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.911116 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.911354 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.911470 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.911772 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.911786 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.911902 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.912142 4931 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.912183 4931 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:44 crc kubenswrapper[4931]: I0130 05:11:44.912201 4931 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.143511 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"29b6db44-5b56-401a-bbce-c9e55735350f","Type":"ContainerDied","Data":"b157ac78dbfd45487650da2507a964b9ef37da369d371e13ba3722a6cc6cbd9b"} Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.143604 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b157ac78dbfd45487650da2507a964b9ef37da369d371e13ba3722a6cc6cbd9b" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.143612 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.148898 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.150406 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f" exitCode=0 Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.150551 4931 scope.go:117] "RemoveContainer" containerID="e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.150655 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.182528 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.182881 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.183185 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.183806 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.188373 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.188968 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.194148 4931 scope.go:117] "RemoveContainer" containerID="6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.222235 4931 scope.go:117] "RemoveContainer" containerID="d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.243645 4931 scope.go:117] "RemoveContainer" containerID="f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.268411 4931 scope.go:117] "RemoveContainer" containerID="9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.289704 4931 scope.go:117] "RemoveContainer" containerID="48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.318482 4931 scope.go:117] "RemoveContainer" containerID="e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7" Jan 30 05:11:45 crc 
kubenswrapper[4931]: E0130 05:11:45.319013 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\": container with ID starting with e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7 not found: ID does not exist" containerID="e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.319080 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7"} err="failed to get container status \"e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\": rpc error: code = NotFound desc = could not find container \"e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7\": container with ID starting with e28329c62307fafa9b09943c159da3008d3f43384301bd491ed17162224c7ef7 not found: ID does not exist" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.319122 4931 scope.go:117] "RemoveContainer" containerID="6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.319674 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\": container with ID starting with 6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944 not found: ID does not exist" containerID="6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.319704 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944"} err="failed to get container status \"6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\": rpc error: code = NotFound desc = could not find container \"6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944\": container with ID starting with 6ee6cf7ac43650602c4ebd09f855d56194bb78d999d693e3022fabc1856f2944 not found: ID does not exist" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.319726 4931 scope.go:117] "RemoveContainer" containerID="d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.320013 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\": container with ID starting with d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9 not found: ID does not exist" containerID="d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.320047 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9"} err="failed to get container status \"d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\": rpc error: code = NotFound desc = could not find container \"d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9\": container with ID starting with d4be4117fa91b6e5b03c1eaf39913d3ab5e17bd5f03e348e073edbab64a4def9 not found: ID does not exist" Jan 30 05:11:45 crc kubenswrapper[4931]: 
I0130 05:11:45.320064 4931 scope.go:117] "RemoveContainer" containerID="f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.320688 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\": container with ID starting with f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409 not found: ID does not exist" containerID="f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.320711 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409"} err="failed to get container status \"f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\": rpc error: code = NotFound desc = could not find container \"f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409\": container with ID starting with f526a8c83c6810286f460546da2c82bf646dc9ce8399a0dc632d3a3c311e4409 not found: ID does not exist" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.320733 4931 scope.go:117] "RemoveContainer" containerID="9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.321089 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\": container with ID starting with 9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f not found: ID does not exist" containerID="9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.321125 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f"} err="failed to get container status \"9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\": rpc error: code = NotFound desc = could not find container \"9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f\": container with ID starting with 9630e3419badc64acf0a6412c00e5bda59081ec55bb617498afeb6f03213693f not found: ID does not exist" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.321149 4931 scope.go:117] "RemoveContainer" containerID="48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.321700 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\": container with ID starting with 48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0 not found: ID does not exist" containerID="48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.321764 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0"} err="failed to get container status \"48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\": rpc error: code = NotFound desc = could not find container \"48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0\": container 
with ID starting with 48216451deed22ce5ed0c8ff3066c96a5b30134cf1b04c1f2f4b9d616a3d2fb0 not found: ID does not exist" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.431605 4931 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.432365 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.433006 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.438462 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.980917 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.981249 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.981757 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.981988 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.982658 4931 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:45 crc kubenswrapper[4931]: I0130 05:11:45.982724 4931 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 05:11:45 crc kubenswrapper[4931]: E0130 05:11:45.983054 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="200ms" Jan 
30 05:11:46 crc kubenswrapper[4931]: E0130 05:11:46.184616 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="400ms" Jan 30 05:11:46 crc kubenswrapper[4931]: E0130 05:11:46.586684 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="800ms" Jan 30 05:11:47 crc kubenswrapper[4931]: E0130 05:11:47.388072 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="1.6s" Jan 30 05:11:48 crc kubenswrapper[4931]: E0130 05:11:48.989370 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="3.2s" Jan 30 05:11:52 crc kubenswrapper[4931]: E0130 05:11:52.191171 4931 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.179:6443: connect: connection refused" interval="6.4s" Jan 30 05:11:54 crc kubenswrapper[4931]: E0130 05:11:54.142109 4931 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.179:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f6a28e8089b33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 05:11:42.504008499 +0000 UTC m=+237.873918766,LastTimestamp:2026-01-30 05:11:42.504008499 +0000 UTC m=+237.873918766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 05:11:54 crc kubenswrapper[4931]: I0130 05:11:54.421811 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:54 crc kubenswrapper[4931]: I0130 05:11:54.422911 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:54 crc kubenswrapper[4931]: I0130 05:11:54.423519 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:54 crc kubenswrapper[4931]: I0130 05:11:54.452193 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:11:54 crc kubenswrapper[4931]: I0130 05:11:54.452254 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:11:54 crc kubenswrapper[4931]: E0130 05:11:54.453078 4931 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:54 crc kubenswrapper[4931]: I0130 05:11:54.453813 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:54 crc kubenswrapper[4931]: W0130 05:11:54.492344 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-77fc260e49c0241d02705c8f7f91523b6d95325b47a95fb3e424c2c9656b0812 WatchSource:0}: Error finding container 77fc260e49c0241d02705c8f7f91523b6d95325b47a95fb3e424c2c9656b0812: Status 404 returned error can't find the container with id 77fc260e49c0241d02705c8f7f91523b6d95325b47a95fb3e424c2c9656b0812 Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.229921 4931 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="eaf476f9566098a1a87403e025cdcd05160001f809e4cb9298ae59d5aa8b2ff1" exitCode=0 Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.230041 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"eaf476f9566098a1a87403e025cdcd05160001f809e4cb9298ae59d5aa8b2ff1"} Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.230534 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"77fc260e49c0241d02705c8f7f91523b6d95325b47a95fb3e424c2c9656b0812"} Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.231000 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.231026 4931 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:11:55 crc kubenswrapper[4931]: E0130 05:11:55.231736 4931 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.231714 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.232514 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.434755 4931 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.435339 4931 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:55 crc kubenswrapper[4931]: I0130 05:11:55.435901 4931 status_manager.go:851] "Failed to get status for pod" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.179:6443: connect: connection refused" Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.242809 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.243165 4931 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07" exitCode=1 Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.243212 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07"} Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.243927 4931 scope.go:117] "RemoveContainer" containerID="1c8fdc468479530a4757f1b448a63857d56a301237824e629ebffeb32cc50f07" Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.246605 4931 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d932b05539c781d68b8d49d05ebdf07debc04383eacefa6ca55f030bb477fc32"} Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.246630 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"10ca6f3e71babbe5496ce5b48f0b41a5430db77a42c693fa1e50ac3da13ff27b"} Jan 30 05:11:56 crc kubenswrapper[4931]: I0130 05:11:56.246641 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f281df23b1d0c78f24af79910a223828a369ad98d97db182f94754867ac781d2"} Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.254721 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.254863 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8121e5a67bd8dca4f25f67bd0f1fbc5baa8e67403010ca8bc1bfb2df11c4e424"} Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.258621 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"961c4f79c9ec6746d30d5d3103df941a0dcd65cbbc6f5cb13c234d98200a880c"} Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.258668 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"43de9f31da3eb1eb0f205fd151d19d7acf071584c8c06a8e003134dc019fe428"} Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.258895 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.259006 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:11:57 crc kubenswrapper[4931]: I0130 05:11:57.259050 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:11:59 crc kubenswrapper[4931]: I0130 05:11:59.454580 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:59 crc kubenswrapper[4931]: I0130 05:11:59.454952 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:11:59 crc kubenswrapper[4931]: I0130 05:11:59.469294 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:12:00 crc kubenswrapper[4931]: I0130 05:12:00.461297 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:12:00 crc kubenswrapper[4931]: I0130 05:12:00.470204 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:12:01 crc kubenswrapper[4931]: I0130 05:12:01.299079 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:12:02 crc kubenswrapper[4931]: I0130 05:12:02.268715 4931 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:12:02 crc kubenswrapper[4931]: I0130 05:12:02.305386 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:12:02 crc kubenswrapper[4931]: I0130 05:12:02.305456 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:12:02 crc kubenswrapper[4931]: I0130 05:12:02.309583 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 05:12:02 crc kubenswrapper[4931]: I0130 05:12:02.311922 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="596fb761-2061-43fb-bcf3-41e724e78d86" Jan 30 05:12:03 crc kubenswrapper[4931]: I0130 05:12:03.313705 4931 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:12:03 crc kubenswrapper[4931]: I0130 05:12:03.314243 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f92025d1-3392-4c42-802e-b549f0bf4e7f" Jan 30 05:12:05 crc kubenswrapper[4931]: I0130 05:12:05.443466 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="596fb761-2061-43fb-bcf3-41e724e78d86" Jan 30 05:12:11 crc kubenswrapper[4931]: I0130 05:12:11.420335 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 05:12:11 crc kubenswrapper[4931]: I0130 05:12:11.670754 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 05:12:12 crc kubenswrapper[4931]: I0130 05:12:12.382375 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 05:12:12 crc kubenswrapper[4931]: I0130 05:12:12.445093 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.289274 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.508117 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.815298 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.868310 4931 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 
Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.876144 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=31.876112647 podStartE2EDuration="31.876112647s" podCreationTimestamp="2026-01-30 05:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:12:02.090851541 +0000 UTC m=+257.460761828" watchObservedRunningTime="2026-01-30 05:12:13.876112647 +0000 UTC m=+269.246022944"
Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.877410 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.877542 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.886124 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 05:12:13 crc kubenswrapper[4931]: I0130 05:12:13.912380 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=11.912345775 podStartE2EDuration="11.912345775s" podCreationTimestamp="2026-01-30 05:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:12:13.904674933 +0000 UTC m=+269.274585230" watchObservedRunningTime="2026-01-30 05:12:13.912345775 +0000 UTC m=+269.282256062"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.006519 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.191326 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.193789 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.270581 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.613588 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.671248 4931 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.740707 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.770416 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 05:12:14 crc kubenswrapper[4931]: I0130 05:12:14.842915 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.033943 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.042632 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.159235 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.324598 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.347219 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.390208 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.468663 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.714285 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.770489 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.781737 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.893017 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 05:12:15 crc kubenswrapper[4931]: I0130 05:12:15.901930 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.146006 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.275808 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.362809 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.369609 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.424208 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.719667 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.748621 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.777603 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.793595 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.807841 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.821234 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.893033 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 05:12:16 crc kubenswrapper[4931]: I0130 05:12:16.896403 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.149381 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.251647 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.283412 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.468066 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.500418 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.555910 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.608331 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.610183 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.637325 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.655504 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.680471 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.734136 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 05:12:17 crc kubenswrapper[4931]: I0130 05:12:17.976878 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.002737 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.024369 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.107002 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.122898 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.168183 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.185498 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.340941 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.377016 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.377513 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.456042 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.461973 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.527805 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.601625 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.626382 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.647559 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.738261 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.822223 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.872915 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.902788 4931 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 05:12:18 crc kubenswrapper[4931]: I0130 05:12:18.974313 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.024746 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.115784 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.169832 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.218660 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.225360 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.306388 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.498466 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.498815 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.520525 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.569823 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.590995 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.641106 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.669694 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.689371 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.721821 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.743616 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.776603 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.793583 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.804376 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.846197 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.861281 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 05:12:19 crc kubenswrapper[4931]: I0130 05:12:19.958713 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.049264 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.062611 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.151237 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.190198 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.382379 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.389567 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.427504 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.428998 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.444677 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.493280 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.575079 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.577372 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.713969 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.714023 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.889740 4931 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 05:12:20 crc kubenswrapper[4931]: I0130 05:12:20.897321 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.151237 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.188220 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.251181 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.333721 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.460481 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.470683 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.478021 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.512479 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.585253 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.596440 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.675240 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.700602 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.715032 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.751955 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.767016 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.797103 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.813678 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.846911 4931 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.856830 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.885644 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.953511 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.954848 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.962177 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 05:12:21 crc kubenswrapper[4931]: I0130 05:12:21.982314 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.080394 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.095149 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.100190 4931 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.127081 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.130173 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.167206 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.224011 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.250377 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.266272 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.359693 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.500682 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.547816 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.661753 4931 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.663042 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.842885 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.857258 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.932086 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.955328 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 05:12:22 crc kubenswrapper[4931]: I0130 05:12:22.964284 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.069733 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.158965 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.165164 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.232746 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.330575 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.362255 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.398717 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.431109 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.483162 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.494926 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.669954 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.670448 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 05:12:23 crc kubenswrapper[4931]: 
I0130 05:12:23.705842 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.787131 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.849627 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.859998 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.922004 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.938193 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.939896 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 05:12:23 crc kubenswrapper[4931]: I0130 05:12:23.991350 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.069957 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.076507 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.190002 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.206162 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.294535 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.413151 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.431697 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.503487 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.595025 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.640452 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.726655 4931 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.783664 4931 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.795172 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.846029 4931 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.846556 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31" gracePeriod=5 Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.865896 4931 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.911239 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.915507 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.923795 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 05:12:24 crc kubenswrapper[4931]: I0130 05:12:24.945245 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.001200 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.171691 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.207806 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.258805 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.289336 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.425688 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.612465 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.695786 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.761077 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.806714 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 
30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.835919 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.876786 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.952814 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 05:12:25 crc kubenswrapper[4931]: I0130 05:12:25.979448 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.049447 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.094749 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.097661 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.178897 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.214560 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.328900 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.396146 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.407663 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.471532 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.489932 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.535390 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.651896 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.703501 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.723025 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-frnwj"] Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.723408 4931 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-frnwj" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="registry-server" containerID="cri-o://df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca" gracePeriod=30 Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.724411 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.745178 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5fcn"] Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.745767 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5fcn" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="registry-server" containerID="cri-o://6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa" gracePeriod=30 Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.760786 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phq4q"] Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.761102 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerName="marketplace-operator" containerID="cri-o://dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935" gracePeriod=30 Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.767364 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.769133 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jp5s"] Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.769772 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7jp5s" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="registry-server" containerID="cri-o://c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" gracePeriod=30 Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.775122 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z64mf"] Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.775563 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z64mf" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="registry-server" containerID="cri-o://be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2" gracePeriod=30 Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.798037 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ng75v"] Jan 30 05:12:26 crc kubenswrapper[4931]: E0130 05:12:26.798354 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" containerName="installer" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.798372 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" containerName="installer" Jan 30 05:12:26 crc kubenswrapper[4931]: E0130 05:12:26.798481 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.798496 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.798672 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.798692 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b6db44-5b56-401a-bbce-c9e55735350f" containerName="installer" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.799481 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.874894 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.894093 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.900735 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.906040 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 05:12:26 crc kubenswrapper[4931]: E0130 05:12:26.928807 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf is running failed: container process not found" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 05:12:26 crc kubenswrapper[4931]: E0130 05:12:26.929579 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf is running failed: container process not found" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 05:12:26 crc kubenswrapper[4931]: E0130 05:12:26.930141 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf is running failed: container process not found" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 05:12:26 crc kubenswrapper[4931]: E0130 05:12:26.930213 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-7jp5s" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="registry-server" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.936699 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx24p\" 
(UniqueName: \"kubernetes.io/projected/29014adb-d772-451f-b4bf-9fdb5d417d1e-kube-api-access-fx24p\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.936778 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29014adb-d772-451f-b4bf-9fdb5d417d1e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:26 crc kubenswrapper[4931]: I0130 05:12:26.936805 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29014adb-d772-451f-b4bf-9fdb5d417d1e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.023655 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.038471 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx24p\" (UniqueName: \"kubernetes.io/projected/29014adb-d772-451f-b4bf-9fdb5d417d1e-kube-api-access-fx24p\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.038535 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29014adb-d772-451f-b4bf-9fdb5d417d1e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.038554 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29014adb-d772-451f-b4bf-9fdb5d417d1e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.043474 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29014adb-d772-451f-b4bf-9fdb5d417d1e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.047993 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29014adb-d772-451f-b4bf-9fdb5d417d1e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc 
kubenswrapper[4931]: I0130 05:12:27.059779 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.072365 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx24p\" (UniqueName: \"kubernetes.io/projected/29014adb-d772-451f-b4bf-9fdb5d417d1e-kube-api-access-fx24p\") pod \"marketplace-operator-79b997595-ng75v\" (UID: \"29014adb-d772-451f-b4bf-9fdb5d417d1e\") " pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.182670 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.189915 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.193349 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.221174 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.226286 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.234760 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.252036 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.266054 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.280682 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.299627 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344534 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6xd\" (UniqueName: \"kubernetes.io/projected/9163b44e-4aa5-422c-a2fd-55747c8d506e-kube-api-access-zg6xd\") pod \"9163b44e-4aa5-422c-a2fd-55747c8d506e\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344666 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-utilities\") pod \"9163b44e-4aa5-422c-a2fd-55747c8d506e\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344764 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-utilities\") pod \"0dbdc3df-7306-41e4-93c6-d7d27d481789\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344806 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r28cq\" (UniqueName: \"kubernetes.io/projected/bc314d0c-da50-4607-93e1-5bece9c3b2b1-kube-api-access-r28cq\") pod \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344856 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-catalog-content\") pod \"bb356dde-8435-471d-a260-8966eeb15eb3\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344907 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gdhb\" (UniqueName: \"kubernetes.io/projected/bb356dde-8435-471d-a260-8966eeb15eb3-kube-api-access-9gdhb\") pod \"bb356dde-8435-471d-a260-8966eeb15eb3\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.344959 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-catalog-content\") pod \"9163b44e-4aa5-422c-a2fd-55747c8d506e\" (UID: \"9163b44e-4aa5-422c-a2fd-55747c8d506e\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.345000 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-utilities\") pod \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.345032 
4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-catalog-content\") pod \"0dbdc3df-7306-41e4-93c6-d7d27d481789\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.345979 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-catalog-content\") pod \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346028 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-trusted-ca\") pod \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346094 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6cpz\" (UniqueName: \"kubernetes.io/projected/0dbdc3df-7306-41e4-93c6-d7d27d481789-kube-api-access-z6cpz\") pod \"0dbdc3df-7306-41e4-93c6-d7d27d481789\" (UID: \"0dbdc3df-7306-41e4-93c6-d7d27d481789\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346119 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8xcp\" (UniqueName: \"kubernetes.io/projected/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-kube-api-access-d8xcp\") pod \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\" (UID: \"9ac0e0dc-4375-4faf-a262-2cf4e9772a29\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346152 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-utilities\") pod \"bb356dde-8435-471d-a260-8966eeb15eb3\" (UID: \"bb356dde-8435-471d-a260-8966eeb15eb3\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346185 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-operator-metrics\") pod \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\" (UID: \"bc314d0c-da50-4607-93e1-5bece9c3b2b1\") " Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346294 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-utilities" (OuterVolumeSpecName: "utilities") pod "9ac0e0dc-4375-4faf-a262-2cf4e9772a29" (UID: "9ac0e0dc-4375-4faf-a262-2cf4e9772a29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.346545 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.347022 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-utilities" (OuterVolumeSpecName: "utilities") pod "0dbdc3df-7306-41e4-93c6-d7d27d481789" (UID: "0dbdc3df-7306-41e4-93c6-d7d27d481789"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.349054 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-utilities" (OuterVolumeSpecName: "utilities") pod "bb356dde-8435-471d-a260-8966eeb15eb3" (UID: "bb356dde-8435-471d-a260-8966eeb15eb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.349557 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "bc314d0c-da50-4607-93e1-5bece9c3b2b1" (UID: "bc314d0c-da50-4607-93e1-5bece9c3b2b1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.350698 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-utilities" (OuterVolumeSpecName: "utilities") pod "9163b44e-4aa5-422c-a2fd-55747c8d506e" (UID: "9163b44e-4aa5-422c-a2fd-55747c8d506e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.355348 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "bc314d0c-da50-4607-93e1-5bece9c3b2b1" (UID: "bc314d0c-da50-4607-93e1-5bece9c3b2b1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.355387 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-kube-api-access-d8xcp" (OuterVolumeSpecName: "kube-api-access-d8xcp") pod "9ac0e0dc-4375-4faf-a262-2cf4e9772a29" (UID: "9ac0e0dc-4375-4faf-a262-2cf4e9772a29"). InnerVolumeSpecName "kube-api-access-d8xcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.355413 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9163b44e-4aa5-422c-a2fd-55747c8d506e-kube-api-access-zg6xd" (OuterVolumeSpecName: "kube-api-access-zg6xd") pod "9163b44e-4aa5-422c-a2fd-55747c8d506e" (UID: "9163b44e-4aa5-422c-a2fd-55747c8d506e"). InnerVolumeSpecName "kube-api-access-zg6xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.355473 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc314d0c-da50-4607-93e1-5bece9c3b2b1-kube-api-access-r28cq" (OuterVolumeSpecName: "kube-api-access-r28cq") pod "bc314d0c-da50-4607-93e1-5bece9c3b2b1" (UID: "bc314d0c-da50-4607-93e1-5bece9c3b2b1"). InnerVolumeSpecName "kube-api-access-r28cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.355496 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb356dde-8435-471d-a260-8966eeb15eb3-kube-api-access-9gdhb" (OuterVolumeSpecName: "kube-api-access-9gdhb") pod "bb356dde-8435-471d-a260-8966eeb15eb3" (UID: "bb356dde-8435-471d-a260-8966eeb15eb3"). InnerVolumeSpecName "kube-api-access-9gdhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.367012 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbdc3df-7306-41e4-93c6-d7d27d481789-kube-api-access-z6cpz" (OuterVolumeSpecName: "kube-api-access-z6cpz") pod "0dbdc3df-7306-41e4-93c6-d7d27d481789" (UID: "0dbdc3df-7306-41e4-93c6-d7d27d481789"). InnerVolumeSpecName "kube-api-access-z6cpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.384901 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ac0e0dc-4375-4faf-a262-2cf4e9772a29" (UID: "9ac0e0dc-4375-4faf-a262-2cf4e9772a29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.410121 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9163b44e-4aa5-422c-a2fd-55747c8d506e" (UID: "9163b44e-4aa5-422c-a2fd-55747c8d506e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.424852 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dbdc3df-7306-41e4-93c6-d7d27d481789" (UID: "0dbdc3df-7306-41e4-93c6-d7d27d481789"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447291 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447445 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r28cq\" (UniqueName: \"kubernetes.io/projected/bc314d0c-da50-4607-93e1-5bece9c3b2b1-kube-api-access-r28cq\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447521 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gdhb\" (UniqueName: \"kubernetes.io/projected/bb356dde-8435-471d-a260-8966eeb15eb3-kube-api-access-9gdhb\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447583 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447653 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447713 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dbdc3df-7306-41e4-93c6-d7d27d481789-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447780 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447838 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8xcp\" (UniqueName: \"kubernetes.io/projected/9ac0e0dc-4375-4faf-a262-2cf4e9772a29-kube-api-access-d8xcp\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447906 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6cpz\" (UniqueName: \"kubernetes.io/projected/0dbdc3df-7306-41e4-93c6-d7d27d481789-kube-api-access-z6cpz\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.447967 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.448024 4931 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bc314d0c-da50-4607-93e1-5bece9c3b2b1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.448082 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6xd\" (UniqueName: \"kubernetes.io/projected/9163b44e-4aa5-422c-a2fd-55747c8d506e-kube-api-access-zg6xd\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.448146 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9163b44e-4aa5-422c-a2fd-55747c8d506e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.462189 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.494011 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb356dde-8435-471d-a260-8966eeb15eb3" (UID: "bb356dde-8435-471d-a260-8966eeb15eb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.504123 4931 generic.go:334] "Generic (PLEG): container finished" podID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerID="df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca" exitCode=0 Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.504216 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerDied","Data":"df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.504248 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frnwj" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.504405 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frnwj" event={"ID":"0dbdc3df-7306-41e4-93c6-d7d27d481789","Type":"ContainerDied","Data":"8fc2bd9106d95cb2212067bb79c5743a637b67855826a61a2a9690fea3308441"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.504985 4931 scope.go:117] "RemoveContainer" containerID="df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.509879 4931 generic.go:334] "Generic (PLEG): container finished" podID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerID="6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa" exitCode=0 Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.509961 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerDied","Data":"6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.509983 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5fcn" event={"ID":"9163b44e-4aa5-422c-a2fd-55747c8d506e","Type":"ContainerDied","Data":"072fd632153afb9e250c4d51854168ac5eaa4674e8d0bff4bbfe11fe55d97dbc"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.510159 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5fcn" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.512318 4931 generic.go:334] "Generic (PLEG): container finished" podID="bb356dde-8435-471d-a260-8966eeb15eb3" containerID="be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2" exitCode=0 Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.512368 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerDied","Data":"be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.512386 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z64mf" event={"ID":"bb356dde-8435-471d-a260-8966eeb15eb3","Type":"ContainerDied","Data":"a6a9276eab6557cd642ac08c2583f1c3b08c9bbb62478c22c66b2f818922633b"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.512465 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z64mf" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.517148 4931 generic.go:334] "Generic (PLEG): container finished" podID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerID="dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935" exitCode=0 Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.517215 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" event={"ID":"bc314d0c-da50-4607-93e1-5bece9c3b2b1","Type":"ContainerDied","Data":"dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.517234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" event={"ID":"bc314d0c-da50-4607-93e1-5bece9c3b2b1","Type":"ContainerDied","Data":"b8dffc3066e9941e3da7e55a7eddcae34aa88188f6b968755e41658b1568e4e5"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.517576 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phq4q" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.527348 4931 generic.go:334] "Generic (PLEG): container finished" podID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" exitCode=0 Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.527414 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jp5s" event={"ID":"9ac0e0dc-4375-4faf-a262-2cf4e9772a29","Type":"ContainerDied","Data":"c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.527462 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jp5s" event={"ID":"9ac0e0dc-4375-4faf-a262-2cf4e9772a29","Type":"ContainerDied","Data":"827a507dec87e3e9291f3f56b6d8162668e69da1d6e51e16d8c5431ea4ab1518"} Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.527584 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jp5s" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.528930 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-frnwj"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.533773 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-frnwj"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.539521 4931 scope.go:117] "RemoveContainer" containerID="a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.556981 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phq4q"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.558462 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb356dde-8435-471d-a260-8966eeb15eb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.559897 4931 scope.go:117] "RemoveContainer" containerID="47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.560356 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phq4q"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.579295 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z64mf"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.581653 4931 scope.go:117] "RemoveContainer" containerID="df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.582410 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca\": container with ID starting with df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca not found: ID does not exist" containerID="df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.582581 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca"} err="failed to get container status \"df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca\": rpc error: code = NotFound desc = could not find container \"df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca\": container with ID starting with df0b357181e3eedcf60e2088c82c16833b5fbf8f654133be047c4df773ca01ca not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.582662 4931 scope.go:117] "RemoveContainer" containerID="a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.583383 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8\": container with ID starting with a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8 not found: ID does not exist" containerID="a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.583464 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8"} err="failed to get container status \"a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8\": rpc error: code = NotFound desc = could not find container \"a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8\": container with ID starting with a50f7b6d3ede0641c074c41b51e3f0fffb9f73055f923a85d79534dc71c276d8 not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.583509 4931 scope.go:117] "RemoveContainer" containerID="47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.584057 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b\": container with ID starting with 47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b not found: ID does not exist" containerID="47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.584107 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b"} err="failed to get container status \"47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b\": rpc error: code = NotFound desc = could not find container \"47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b\": container with ID starting with 47721f40853566da291d4128aa27cfd8e9088a579539c3cf063e110cae98ba9b not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.584145 4931 scope.go:117] "RemoveContainer" containerID="6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.584495 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z64mf"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.587740 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jp5s"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.590677 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jp5s"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.593460 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5fcn"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.596489 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5fcn"] Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.601686 4931 scope.go:117] "RemoveContainer" containerID="70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.615086 4931 scope.go:117] "RemoveContainer" containerID="395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.650289 4931 scope.go:117] "RemoveContainer" containerID="6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.651149 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa\": container with ID starting with 6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa not found: ID does not exist" containerID="6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.651230 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa"} err="failed to get container status \"6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa\": rpc error: code = NotFound desc = could not find container \"6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa\": container with ID starting with 6a7c490a1ca576cf6b2bf25d389b7adc58b74fbf8028b2f89a729132152547fa not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.651288 4931 scope.go:117] "RemoveContainer" containerID="70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.651908 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6\": container with ID starting with 70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6 not found: ID does not exist" containerID="70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.651961 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6"} err="failed to get container status \"70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6\": rpc error: code = NotFound desc = could not find container \"70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6\": container with ID starting with 70058c728a740db130f42fbd757863af66006cb2424ed6ee705528054a5e47b6 not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.651998 4931 scope.go:117] "RemoveContainer" containerID="395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.652591 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae\": container with ID starting with 395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae not found: ID does not exist" containerID="395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.652785 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae"} err="failed to get container status \"395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae\": rpc error: code = NotFound desc = could not find container \"395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae\": container with ID starting with 395003f8081bcbed1e60f901d15e3e3273a10ea7d416310fe7a937a87b9ab0ae not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.652959 4931 scope.go:117] "RemoveContainer" containerID="be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2" Jan 30 05:12:27 crc 
kubenswrapper[4931]: I0130 05:12:27.674991 4931 scope.go:117] "RemoveContainer" containerID="6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.696637 4931 scope.go:117] "RemoveContainer" containerID="27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.718902 4931 scope.go:117] "RemoveContainer" containerID="be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.720109 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2\": container with ID starting with be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2 not found: ID does not exist" containerID="be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.720148 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2"} err="failed to get container status \"be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2\": rpc error: code = NotFound desc = could not find container \"be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2\": container with ID starting with be84f384f34a985ca6074056a3f0a86f8302b573afd7c5d671ba2a6cd796e0a2 not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.720187 4931 scope.go:117] "RemoveContainer" containerID="6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.720573 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06\": container with ID starting with 6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06 not found: ID does not exist" containerID="6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.720611 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06"} err="failed to get container status \"6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06\": rpc error: code = NotFound desc = could not find container \"6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06\": container with ID starting with 6c297be527ffd29be1f24337f1122c053fca9ee7947104d32085a2d58afaed06 not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.720632 4931 scope.go:117] "RemoveContainer" containerID="27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.721037 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f\": container with ID starting with 27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f not found: ID does not exist" containerID="27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.721087 4931 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f"} err="failed to get container status \"27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f\": rpc error: code = NotFound desc = could not find container \"27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f\": container with ID starting with 27fb758631cfe852aecd73c3499352e9792cd3594cffa3ea4c324b21b40b055f not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.721119 4931 scope.go:117] "RemoveContainer" containerID="dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.742494 4931 scope.go:117] "RemoveContainer" containerID="dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.744092 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935\": container with ID starting with dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935 not found: ID does not exist" containerID="dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.744161 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935"} err="failed to get container status \"dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935\": rpc error: code = NotFound desc = could not find container \"dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935\": container with ID starting with dffe4917fe91f4a7ea81fe708baf5df13ef743b12684b18852f155d38db20935 not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.744190 4931 scope.go:117] "RemoveContainer" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.762927 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.764801 4931 scope.go:117] "RemoveContainer" containerID="709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.786835 4931 scope.go:117] "RemoveContainer" containerID="ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.814079 4931 scope.go:117] "RemoveContainer" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.822119 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf\": container with ID starting with c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf not found: ID does not exist" containerID="c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.823117 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf"} err="failed to get container status 
\"c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf\": rpc error: code = NotFound desc = could not find container \"c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf\": container with ID starting with c58dee2d75112faf5278471fa675a22e31796885008d0a213b4f2e18d60bfcdf not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.823247 4931 scope.go:117] "RemoveContainer" containerID="709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.824167 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd\": container with ID starting with 709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd not found: ID does not exist" containerID="709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.824235 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd"} err="failed to get container status \"709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd\": rpc error: code = NotFound desc = could not find container \"709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd\": container with ID starting with 709b951cf92c6bbce01c8ab7f77c6c23615d324ebd972a2d9fee7993dc0333dd not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.824279 4931 scope.go:117] "RemoveContainer" containerID="ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2" Jan 30 05:12:27 crc kubenswrapper[4931]: E0130 05:12:27.824801 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2\": container with ID starting with ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2 not found: ID does not exist" containerID="ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.824860 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2"} err="failed to get container status \"ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2\": rpc error: code = NotFound desc = could not find container \"ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2\": container with ID starting with ed987c960643926b76f2a7a57feffb55f8defd2210965e18745055f82bde89c2 not found: ID does not exist" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.841599 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.849443 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 05:12:27 crc kubenswrapper[4931]: I0130 05:12:27.869580 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 05:12:28 crc kubenswrapper[4931]: I0130 05:12:28.257514 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 
05:12:28 crc kubenswrapper[4931]: I0130 05:12:28.335358 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ng75v"] Jan 30 05:12:28 crc kubenswrapper[4931]: I0130 05:12:28.644739 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 05:12:28 crc kubenswrapper[4931]: I0130 05:12:28.945551 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 05:12:28 crc kubenswrapper[4931]: I0130 05:12:28.992660 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 05:12:29 crc kubenswrapper[4931]: I0130 05:12:29.432634 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" path="/var/lib/kubelet/pods/0dbdc3df-7306-41e4-93c6-d7d27d481789/volumes" Jan 30 05:12:29 crc kubenswrapper[4931]: I0130 05:12:29.433547 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" path="/var/lib/kubelet/pods/9163b44e-4aa5-422c-a2fd-55747c8d506e/volumes" Jan 30 05:12:29 crc kubenswrapper[4931]: I0130 05:12:29.434240 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" path="/var/lib/kubelet/pods/9ac0e0dc-4375-4faf-a262-2cf4e9772a29/volumes" Jan 30 05:12:29 crc kubenswrapper[4931]: I0130 05:12:29.435749 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" path="/var/lib/kubelet/pods/bb356dde-8435-471d-a260-8966eeb15eb3/volumes" Jan 30 05:12:29 crc kubenswrapper[4931]: I0130 05:12:29.436658 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" path="/var/lib/kubelet/pods/bc314d0c-da50-4607-93e1-5bece9c3b2b1/volumes" Jan 30 05:12:29 crc kubenswrapper[4931]: I0130 05:12:29.676803 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.047125 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 05:12:30 crc kubenswrapper[4931]: E0130 05:12:30.124957 4931 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 05:12:30 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-ng75v_openshift-marketplace_29014adb-d772-451f-b4bf-9fdb5d417d1e_0(45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d): error adding pod openshift-marketplace_marketplace-operator-79b997595-ng75v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d" Netns:"/var/run/netns/421ebf21-8cef-416e-9b55-b397c7c51bc2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-ng75v;K8S_POD_INFRA_CONTAINER_ID=45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d;K8S_POD_UID=29014adb-d772-451f-b4bf-9fdb5d417d1e" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-ng75v] networking: Multus: 
[openshift-marketplace/marketplace-operator-79b997595-ng75v/29014adb-d772-451f-b4bf-9fdb5d417d1e]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-ng75v in out of cluster comm: pod "marketplace-operator-79b997595-ng75v" not found Jan 30 05:12:30 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 05:12:30 crc kubenswrapper[4931]: > Jan 30 05:12:30 crc kubenswrapper[4931]: E0130 05:12:30.125088 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 30 05:12:30 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-ng75v_openshift-marketplace_29014adb-d772-451f-b4bf-9fdb5d417d1e_0(45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d): error adding pod openshift-marketplace_marketplace-operator-79b997595-ng75v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d" Netns:"/var/run/netns/421ebf21-8cef-416e-9b55-b397c7c51bc2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-ng75v;K8S_POD_INFRA_CONTAINER_ID=45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d;K8S_POD_UID=29014adb-d772-451f-b4bf-9fdb5d417d1e" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-ng75v] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-ng75v/29014adb-d772-451f-b4bf-9fdb5d417d1e]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-ng75v in out of cluster comm: pod "marketplace-operator-79b997595-ng75v" not found Jan 30 05:12:30 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 05:12:30 crc kubenswrapper[4931]: > pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:30 crc kubenswrapper[4931]: E0130 05:12:30.125126 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 30 05:12:30 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-ng75v_openshift-marketplace_29014adb-d772-451f-b4bf-9fdb5d417d1e_0(45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d): error adding pod openshift-marketplace_marketplace-operator-79b997595-ng75v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d" 
Netns:"/var/run/netns/421ebf21-8cef-416e-9b55-b397c7c51bc2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-ng75v;K8S_POD_INFRA_CONTAINER_ID=45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d;K8S_POD_UID=29014adb-d772-451f-b4bf-9fdb5d417d1e" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-ng75v] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-ng75v/29014adb-d772-451f-b4bf-9fdb5d417d1e]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-ng75v in out of cluster comm: pod "marketplace-operator-79b997595-ng75v" not found Jan 30 05:12:30 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 05:12:30 crc kubenswrapper[4931]: > pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:30 crc kubenswrapper[4931]: E0130 05:12:30.125271 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-79b997595-ng75v_openshift-marketplace(29014adb-d772-451f-b4bf-9fdb5d417d1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-79b997595-ng75v_openshift-marketplace(29014adb-d772-451f-b4bf-9fdb5d417d1e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-79b997595-ng75v_openshift-marketplace_29014adb-d772-451f-b4bf-9fdb5d417d1e_0(45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d): error adding pod openshift-marketplace_marketplace-operator-79b997595-ng75v to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d\\\" Netns:\\\"/var/run/netns/421ebf21-8cef-416e-9b55-b397c7c51bc2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-79b997595-ng75v;K8S_POD_INFRA_CONTAINER_ID=45be532072b47c75c02d9d8ceedac9361372f18babde4e28708c3421d7f1b46d;K8S_POD_UID=29014adb-d772-451f-b4bf-9fdb5d417d1e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-79b997595-ng75v] networking: Multus: [openshift-marketplace/marketplace-operator-79b997595-ng75v/29014adb-d772-451f-b4bf-9fdb5d417d1e]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod marketplace-operator-79b997595-ng75v in out of cluster comm: pod \\\"marketplace-operator-79b997595-ng75v\\\" not found\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" podUID="29014adb-d772-451f-b4bf-9fdb5d417d1e" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.300163 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.440992 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.441066 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.554543 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.555832 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.555882 4931 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31" exitCode=137 Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.555968 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.556588 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.556967 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.557115 4931 scope.go:117] "RemoveContainer" containerID="2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.579609 4931 scope.go:117] "RemoveContainer" containerID="2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31" Jan 30 05:12:30 crc kubenswrapper[4931]: E0130 05:12:30.580251 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31\": container with ID starting with 2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31 not found: ID does not exist" containerID="2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.580313 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31"} err="failed to get container status \"2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31\": rpc error: code = NotFound desc = could not find container \"2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31\": container with ID starting with 2d4ef79a860dedfaa63778b9eec997980772fe753dd08276c911199c82d54e31 not found: ID does not exist" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.598817 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.601668 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.610983 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611058 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611157 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611182 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611219 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 05:12:30 crc 
kubenswrapper[4931]: I0130 05:12:30.611305 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611396 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611382 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611477 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611583 4931 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.611600 4931 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.625303 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.713672 4931 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.714349 4931 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.714362 4931 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.737486 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.775391 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ng75v"] Jan 30 05:12:30 crc kubenswrapper[4931]: I0130 05:12:30.795572 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.289618 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.430257 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.431088 4931 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.446313 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.446351 4931 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8bbd6ff1-b870-4c5c-a24c-91b05d22f7bf" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.452896 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.452943 4931 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8bbd6ff1-b870-4c5c-a24c-91b05d22f7bf" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.566274 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" event={"ID":"29014adb-d772-451f-b4bf-9fdb5d417d1e","Type":"ContainerStarted","Data":"4eed271da89e3844d484a968ce5746fa8c0a3cc42efa472504fdfda70ee56b74"} Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.566360 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" 
event={"ID":"29014adb-d772-451f-b4bf-9fdb5d417d1e","Type":"ContainerStarted","Data":"1c7e8199aad72629af78cacd4130304244471f22d2b45dbb3626a50ad4e91fc9"} Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.567652 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.573505 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" Jan 30 05:12:31 crc kubenswrapper[4931]: I0130 05:12:31.590082 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ng75v" podStartSLOduration=5.590052116 podStartE2EDuration="5.590052116s" podCreationTimestamp="2026-01-30 05:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:12:31.585191516 +0000 UTC m=+286.955101813" watchObservedRunningTime="2026-01-30 05:12:31.590052116 +0000 UTC m=+286.959962373" Jan 30 05:12:32 crc kubenswrapper[4931]: I0130 05:12:32.239980 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 05:12:45 crc kubenswrapper[4931]: I0130 05:12:45.188133 4931 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 05:12:47 crc kubenswrapper[4931]: I0130 05:12:47.624765 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fsn4r"] Jan 30 05:12:47 crc kubenswrapper[4931]: I0130 05:12:47.625315 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" podUID="61a1f22c-baac-4356-9d01-ec2b51700b3a" containerName="controller-manager" containerID="cri-o://5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071" gracePeriod=30 Jan 30 05:12:47 crc kubenswrapper[4931]: I0130 05:12:47.724339 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"] Jan 30 05:12:47 crc kubenswrapper[4931]: I0130 05:12:47.724676 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" podUID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" containerName="route-controller-manager" containerID="cri-o://8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491" gracePeriod=30 Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.059188 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.134310 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.178886 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-config\") pod \"61a1f22c-baac-4356-9d01-ec2b51700b3a\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.178946 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a1f22c-baac-4356-9d01-ec2b51700b3a-serving-cert\") pod \"61a1f22c-baac-4356-9d01-ec2b51700b3a\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.178986 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-client-ca\") pod \"61a1f22c-baac-4356-9d01-ec2b51700b3a\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.179011 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2kc7\" (UniqueName: \"kubernetes.io/projected/61a1f22c-baac-4356-9d01-ec2b51700b3a-kube-api-access-b2kc7\") pod \"61a1f22c-baac-4356-9d01-ec2b51700b3a\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.179032 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-proxy-ca-bundles\") pod \"61a1f22c-baac-4356-9d01-ec2b51700b3a\" (UID: \"61a1f22c-baac-4356-9d01-ec2b51700b3a\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.179997 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "61a1f22c-baac-4356-9d01-ec2b51700b3a" (UID: "61a1f22c-baac-4356-9d01-ec2b51700b3a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.180062 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "61a1f22c-baac-4356-9d01-ec2b51700b3a" (UID: "61a1f22c-baac-4356-9d01-ec2b51700b3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.180097 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-config" (OuterVolumeSpecName: "config") pod "61a1f22c-baac-4356-9d01-ec2b51700b3a" (UID: "61a1f22c-baac-4356-9d01-ec2b51700b3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.186243 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a1f22c-baac-4356-9d01-ec2b51700b3a-kube-api-access-b2kc7" (OuterVolumeSpecName: "kube-api-access-b2kc7") pod "61a1f22c-baac-4356-9d01-ec2b51700b3a" (UID: "61a1f22c-baac-4356-9d01-ec2b51700b3a"). 
InnerVolumeSpecName "kube-api-access-b2kc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.187383 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a1f22c-baac-4356-9d01-ec2b51700b3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61a1f22c-baac-4356-9d01-ec2b51700b3a" (UID: "61a1f22c-baac-4356-9d01-ec2b51700b3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.279850 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22qxm\" (UniqueName: \"kubernetes.io/projected/4fd326f4-63cb-4c1d-bb6c-98118a45f714-kube-api-access-22qxm\") pod \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.279984 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-config\") pod \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280070 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-client-ca\") pod \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280142 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd326f4-63cb-4c1d-bb6c-98118a45f714-serving-cert\") pod \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\" (UID: \"4fd326f4-63cb-4c1d-bb6c-98118a45f714\") " Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280621 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280651 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a1f22c-baac-4356-9d01-ec2b51700b3a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280668 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280688 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2kc7\" (UniqueName: \"kubernetes.io/projected/61a1f22c-baac-4356-9d01-ec2b51700b3a-kube-api-access-b2kc7\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280708 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61a1f22c-baac-4356-9d01-ec2b51700b3a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.280953 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-config" (OuterVolumeSpecName: "config") pod "4fd326f4-63cb-4c1d-bb6c-98118a45f714" (UID: 
"4fd326f4-63cb-4c1d-bb6c-98118a45f714"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.281112 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-client-ca" (OuterVolumeSpecName: "client-ca") pod "4fd326f4-63cb-4c1d-bb6c-98118a45f714" (UID: "4fd326f4-63cb-4c1d-bb6c-98118a45f714"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.283974 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd326f4-63cb-4c1d-bb6c-98118a45f714-kube-api-access-22qxm" (OuterVolumeSpecName: "kube-api-access-22qxm") pod "4fd326f4-63cb-4c1d-bb6c-98118a45f714" (UID: "4fd326f4-63cb-4c1d-bb6c-98118a45f714"). InnerVolumeSpecName "kube-api-access-22qxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.284696 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd326f4-63cb-4c1d-bb6c-98118a45f714-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4fd326f4-63cb-4c1d-bb6c-98118a45f714" (UID: "4fd326f4-63cb-4c1d-bb6c-98118a45f714"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.382447 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22qxm\" (UniqueName: \"kubernetes.io/projected/4fd326f4-63cb-4c1d-bb6c-98118a45f714-kube-api-access-22qxm\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.382504 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.382514 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fd326f4-63cb-4c1d-bb6c-98118a45f714-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.382524 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fd326f4-63cb-4c1d-bb6c-98118a45f714-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.428462 4931 generic.go:334] "Generic (PLEG): container finished" podID="61a1f22c-baac-4356-9d01-ec2b51700b3a" containerID="5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071" exitCode=0 Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.428543 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" event={"ID":"61a1f22c-baac-4356-9d01-ec2b51700b3a","Type":"ContainerDied","Data":"5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071"} Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.428577 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" event={"ID":"61a1f22c-baac-4356-9d01-ec2b51700b3a","Type":"ContainerDied","Data":"e5797b6657e3c9082bc25bca94daca7b60cb46a9c442bf1c2289963ba55e2ade"} Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.428579 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fsn4r" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.428600 4931 scope.go:117] "RemoveContainer" containerID="5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.438765 4931 generic.go:334] "Generic (PLEG): container finished" podID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" containerID="8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491" exitCode=0 Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.438830 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.438833 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" event={"ID":"4fd326f4-63cb-4c1d-bb6c-98118a45f714","Type":"ContainerDied","Data":"8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491"} Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.439050 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4" event={"ID":"4fd326f4-63cb-4c1d-bb6c-98118a45f714","Type":"ContainerDied","Data":"833fcf0086ce5d914597f0c997c10afab54c09e9f589df3d6d360cb20264d686"} Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.469501 4931 scope.go:117] "RemoveContainer" containerID="5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071" Jan 30 05:12:48 crc kubenswrapper[4931]: E0130 05:12:48.470592 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071\": container with ID starting with 5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071 not found: ID does not exist" containerID="5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.470678 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071"} err="failed to get container status \"5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071\": rpc error: code = NotFound desc = could not find container \"5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071\": container with ID starting with 5f95b1aab4f0cd9bb04c69533b5380769db0181d2dc9005aafa2ab56e08f0071 not found: ID does not exist" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.470751 4931 scope.go:117] "RemoveContainer" containerID="8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.478500 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fsn4r"] Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.484076 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fsn4r"] Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.491658 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"] Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.494627 4931 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5zjn4"] Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.507773 4931 scope.go:117] "RemoveContainer" containerID="8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491" Jan 30 05:12:48 crc kubenswrapper[4931]: E0130 05:12:48.508399 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491\": container with ID starting with 8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491 not found: ID does not exist" containerID="8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491" Jan 30 05:12:48 crc kubenswrapper[4931]: I0130 05:12:48.508498 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491"} err="failed to get container status \"8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491\": rpc error: code = NotFound desc = could not find container \"8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491\": container with ID starting with 8284502c7c6af356e24dd93841b664f4e4ab158665d96da9e6e1f25234da0491 not found: ID does not exist" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.433296 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" path="/var/lib/kubelet/pods/4fd326f4-63cb-4c1d-bb6c-98118a45f714/volumes" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.434805 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a1f22c-baac-4356-9d01-ec2b51700b3a" path="/var/lib/kubelet/pods/61a1f22c-baac-4356-9d01-ec2b51700b3a/volumes" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.620809 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5"] Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621153 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621167 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621183 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621190 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621199 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621206 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621216 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621225 4931 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621235 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621242 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621256 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621262 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="extract-content" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621270 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerName="marketplace-operator" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621276 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerName="marketplace-operator" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621287 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621294 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621301 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621307 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621314 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1f22c-baac-4356-9d01-ec2b51700b3a" containerName="controller-manager" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621321 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1f22c-baac-4356-9d01-ec2b51700b3a" containerName="controller-manager" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621328 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621334 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621344 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621352 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621359 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621364 4931 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621372 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" containerName="route-controller-manager" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621379 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" containerName="route-controller-manager" Jan 30 05:12:49 crc kubenswrapper[4931]: E0130 05:12:49.621386 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621392 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="extract-utilities" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621538 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9163b44e-4aa5-422c-a2fd-55747c8d506e" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621547 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbdc3df-7306-41e4-93c6-d7d27d481789" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621562 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc314d0c-da50-4607-93e1-5bece9c3b2b1" containerName="marketplace-operator" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621570 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a1f22c-baac-4356-9d01-ec2b51700b3a" containerName="controller-manager" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621579 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd326f4-63cb-4c1d-bb6c-98118a45f714" containerName="route-controller-manager" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621587 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb356dde-8435-471d-a260-8966eeb15eb3" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.621595 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac0e0dc-4375-4faf-a262-2cf4e9772a29" containerName="registry-server" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.622110 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.623335 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-vrtc2"] Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.623745 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.624121 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.625453 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629042 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629201 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629332 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629486 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629712 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629828 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629946 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.629953 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.630234 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.630644 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.638747 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.645165 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-vrtc2"] Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.651500 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5"] Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703163 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24kq\" (UniqueName: \"kubernetes.io/projected/22e7c3b5-dbbe-499e-84b0-b581db2401be-kube-api-access-q24kq\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8n5\" (UniqueName: \"kubernetes.io/projected/6252d8d5-c05c-492d-adc0-37e03d1c8999-kube-api-access-zx8n5\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: 
\"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703254 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e7c3b5-dbbe-499e-84b0-b581db2401be-serving-cert\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703279 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-proxy-ca-bundles\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252d8d5-c05c-492d-adc0-37e03d1c8999-config\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703317 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6252d8d5-c05c-492d-adc0-37e03d1c8999-serving-cert\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703348 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-config\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703373 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6252d8d5-c05c-492d-adc0-37e03d1c8999-client-ca\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.703392 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-client-ca\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.804933 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6252d8d5-c05c-492d-adc0-37e03d1c8999-serving-cert\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") 
" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-config\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805041 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6252d8d5-c05c-492d-adc0-37e03d1c8999-client-ca\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805060 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-client-ca\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805100 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24kq\" (UniqueName: \"kubernetes.io/projected/22e7c3b5-dbbe-499e-84b0-b581db2401be-kube-api-access-q24kq\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805129 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8n5\" (UniqueName: \"kubernetes.io/projected/6252d8d5-c05c-492d-adc0-37e03d1c8999-kube-api-access-zx8n5\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e7c3b5-dbbe-499e-84b0-b581db2401be-serving-cert\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805665 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-proxy-ca-bundles\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.805695 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252d8d5-c05c-492d-adc0-37e03d1c8999-config\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.806466 
4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-client-ca\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.806920 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6252d8d5-c05c-492d-adc0-37e03d1c8999-client-ca\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.807048 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-proxy-ca-bundles\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.807185 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6252d8d5-c05c-492d-adc0-37e03d1c8999-config\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.808618 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-config\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.811269 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e7c3b5-dbbe-499e-84b0-b581db2401be-serving-cert\") pod \"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.814237 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6252d8d5-c05c-492d-adc0-37e03d1c8999-serving-cert\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.824265 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8n5\" (UniqueName: \"kubernetes.io/projected/6252d8d5-c05c-492d-adc0-37e03d1c8999-kube-api-access-zx8n5\") pod \"route-controller-manager-59fc96bcb9-lbvj5\" (UID: \"6252d8d5-c05c-492d-adc0-37e03d1c8999\") " pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.825468 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24kq\" (UniqueName: \"kubernetes.io/projected/22e7c3b5-dbbe-499e-84b0-b581db2401be-kube-api-access-q24kq\") pod 
\"controller-manager-796947dbf8-vrtc2\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.950195 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:49 crc kubenswrapper[4931]: I0130 05:12:49.961113 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.207289 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-vrtc2"] Jan 30 05:12:50 crc kubenswrapper[4931]: W0130 05:12:50.219695 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22e7c3b5_dbbe_499e_84b0_b581db2401be.slice/crio-8730f1dad4de33142daffe01ac15a9b43daa10f63aaa49338319e0fede7681d4 WatchSource:0}: Error finding container 8730f1dad4de33142daffe01ac15a9b43daa10f63aaa49338319e0fede7681d4: Status 404 returned error can't find the container with id 8730f1dad4de33142daffe01ac15a9b43daa10f63aaa49338319e0fede7681d4 Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.241972 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5"] Jan 30 05:12:50 crc kubenswrapper[4931]: W0130 05:12:50.286443 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6252d8d5_c05c_492d_adc0_37e03d1c8999.slice/crio-e1ee5ddcd91c92063356ce95f477542c9860273020ffe09123142143907a303e WatchSource:0}: Error finding container e1ee5ddcd91c92063356ce95f477542c9860273020ffe09123142143907a303e: Status 404 returned error can't find the container with id e1ee5ddcd91c92063356ce95f477542c9860273020ffe09123142143907a303e Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.457264 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" event={"ID":"22e7c3b5-dbbe-499e-84b0-b581db2401be","Type":"ContainerStarted","Data":"334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051"} Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.457883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" event={"ID":"22e7c3b5-dbbe-499e-84b0-b581db2401be","Type":"ContainerStarted","Data":"8730f1dad4de33142daffe01ac15a9b43daa10f63aaa49338319e0fede7681d4"} Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.457915 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.460185 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" event={"ID":"6252d8d5-c05c-492d-adc0-37e03d1c8999","Type":"ContainerStarted","Data":"be7c814ba8fd871965485b4f6cee86d7878e1312bde78e615d3cbcc32e172174"} Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.460239 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" 
event={"ID":"6252d8d5-c05c-492d-adc0-37e03d1c8999","Type":"ContainerStarted","Data":"e1ee5ddcd91c92063356ce95f477542c9860273020ffe09123142143907a303e"} Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.460438 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.461707 4931 patch_prober.go:28] interesting pod/route-controller-manager-59fc96bcb9-lbvj5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.461860 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" podUID="6252d8d5-c05c-492d-adc0-37e03d1c8999" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.467495 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.493655 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" podStartSLOduration=3.493631136 podStartE2EDuration="3.493631136s" podCreationTimestamp="2026-01-30 05:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:12:50.491734951 +0000 UTC m=+305.861645208" watchObservedRunningTime="2026-01-30 05:12:50.493631136 +0000 UTC m=+305.863541393" Jan 30 05:12:50 crc kubenswrapper[4931]: I0130 05:12:50.564998 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" podStartSLOduration=3.564974698 podStartE2EDuration="3.564974698s" podCreationTimestamp="2026-01-30 05:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:12:50.560463008 +0000 UTC m=+305.930373285" watchObservedRunningTime="2026-01-30 05:12:50.564974698 +0000 UTC m=+305.934884965" Jan 30 05:12:51 crc kubenswrapper[4931]: I0130 05:12:51.470826 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59fc96bcb9-lbvj5" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.708964 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-629s4"] Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.711086 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.722600 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-629s4"] Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.884512 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f5aa81d9-e89e-4958-b823-73da6250ba31-registry-certificates\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.884707 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f5aa81d9-e89e-4958-b823-73da6250ba31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.884752 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-bound-sa-token\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.884797 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6rg\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-kube-api-access-mn6rg\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.884939 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-registry-tls\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.885038 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5aa81d9-e89e-4958-b823-73da6250ba31-trusted-ca\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.885063 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f5aa81d9-e89e-4958-b823-73da6250ba31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.885198 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.908029 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987018 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f5aa81d9-e89e-4958-b823-73da6250ba31-registry-certificates\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987178 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f5aa81d9-e89e-4958-b823-73da6250ba31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987226 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-bound-sa-token\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987275 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn6rg\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-kube-api-access-mn6rg\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987331 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-registry-tls\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987380 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5aa81d9-e89e-4958-b823-73da6250ba31-trusted-ca\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.987838 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f5aa81d9-e89e-4958-b823-73da6250ba31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.990410 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f5aa81d9-e89e-4958-b823-73da6250ba31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.988404 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f5aa81d9-e89e-4958-b823-73da6250ba31-registry-certificates\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.990322 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5aa81d9-e89e-4958-b823-73da6250ba31-trusted-ca\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.995467 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-registry-tls\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:02 crc kubenswrapper[4931]: I0130 05:13:02.995591 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f5aa81d9-e89e-4958-b823-73da6250ba31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:03 crc kubenswrapper[4931]: I0130 05:13:03.008692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-bound-sa-token\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:03 crc kubenswrapper[4931]: I0130 05:13:03.009301 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn6rg\" (UniqueName: \"kubernetes.io/projected/f5aa81d9-e89e-4958-b823-73da6250ba31-kube-api-access-mn6rg\") pod \"image-registry-66df7c8f76-629s4\" (UID: \"f5aa81d9-e89e-4958-b823-73da6250ba31\") " pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:03 crc kubenswrapper[4931]: I0130 05:13:03.050077 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:03 crc kubenswrapper[4931]: I0130 05:13:03.546400 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-629s4"] Jan 30 05:13:03 crc kubenswrapper[4931]: W0130 05:13:03.553996 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5aa81d9_e89e_4958_b823_73da6250ba31.slice/crio-d349d35cc57e9d1fc28c678fba86d94ef517778f251e24bc650cd8a859591d76 WatchSource:0}: Error finding container d349d35cc57e9d1fc28c678fba86d94ef517778f251e24bc650cd8a859591d76: Status 404 returned error can't find the container with id d349d35cc57e9d1fc28c678fba86d94ef517778f251e24bc650cd8a859591d76 Jan 30 05:13:04 crc kubenswrapper[4931]: I0130 05:13:04.564706 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" event={"ID":"f5aa81d9-e89e-4958-b823-73da6250ba31","Type":"ContainerStarted","Data":"cbd93577eba7683d1f1840cf871ba2bfc25b76cfa0a06797054e30eed27e259a"} Jan 30 05:13:04 crc kubenswrapper[4931]: I0130 05:13:04.565205 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:04 crc kubenswrapper[4931]: I0130 05:13:04.565224 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" event={"ID":"f5aa81d9-e89e-4958-b823-73da6250ba31","Type":"ContainerStarted","Data":"d349d35cc57e9d1fc28c678fba86d94ef517778f251e24bc650cd8a859591d76"} Jan 30 05:13:04 crc kubenswrapper[4931]: I0130 05:13:04.613615 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" podStartSLOduration=2.613595025 podStartE2EDuration="2.613595025s" podCreationTimestamp="2026-01-30 05:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:13:04.608208769 +0000 UTC m=+319.978119036" watchObservedRunningTime="2026-01-30 05:13:04.613595025 +0000 UTC m=+319.983505282" Jan 30 05:13:23 crc kubenswrapper[4931]: I0130 05:13:23.057650 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-629s4" Jan 30 05:13:23 crc kubenswrapper[4931]: I0130 05:13:23.192840 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f8zg7"] Jan 30 05:13:27 crc kubenswrapper[4931]: I0130 05:13:27.363591 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:13:27 crc kubenswrapper[4931]: I0130 05:13:27.365750 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:13:27 crc kubenswrapper[4931]: I0130 05:13:27.651654 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-796947dbf8-vrtc2"] Jan 30 05:13:27 crc kubenswrapper[4931]: I0130 05:13:27.652681 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" podUID="22e7c3b5-dbbe-499e-84b0-b581db2401be" containerName="controller-manager" containerID="cri-o://334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051" gracePeriod=30 Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.063298 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.170281 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-client-ca\") pod \"22e7c3b5-dbbe-499e-84b0-b581db2401be\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.170370 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-config\") pod \"22e7c3b5-dbbe-499e-84b0-b581db2401be\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.170479 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-proxy-ca-bundles\") pod \"22e7c3b5-dbbe-499e-84b0-b581db2401be\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.170536 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q24kq\" (UniqueName: \"kubernetes.io/projected/22e7c3b5-dbbe-499e-84b0-b581db2401be-kube-api-access-q24kq\") pod \"22e7c3b5-dbbe-499e-84b0-b581db2401be\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.170571 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e7c3b5-dbbe-499e-84b0-b581db2401be-serving-cert\") pod \"22e7c3b5-dbbe-499e-84b0-b581db2401be\" (UID: \"22e7c3b5-dbbe-499e-84b0-b581db2401be\") " Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.172500 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-client-ca" (OuterVolumeSpecName: "client-ca") pod "22e7c3b5-dbbe-499e-84b0-b581db2401be" (UID: "22e7c3b5-dbbe-499e-84b0-b581db2401be"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.172777 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "22e7c3b5-dbbe-499e-84b0-b581db2401be" (UID: "22e7c3b5-dbbe-499e-84b0-b581db2401be"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.173743 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-config" (OuterVolumeSpecName: "config") pod "22e7c3b5-dbbe-499e-84b0-b581db2401be" (UID: "22e7c3b5-dbbe-499e-84b0-b581db2401be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.180930 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22e7c3b5-dbbe-499e-84b0-b581db2401be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "22e7c3b5-dbbe-499e-84b0-b581db2401be" (UID: "22e7c3b5-dbbe-499e-84b0-b581db2401be"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.180971 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e7c3b5-dbbe-499e-84b0-b581db2401be-kube-api-access-q24kq" (OuterVolumeSpecName: "kube-api-access-q24kq") pod "22e7c3b5-dbbe-499e-84b0-b581db2401be" (UID: "22e7c3b5-dbbe-499e-84b0-b581db2401be"). InnerVolumeSpecName "kube-api-access-q24kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.272700 4931 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.272820 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q24kq\" (UniqueName: \"kubernetes.io/projected/22e7c3b5-dbbe-499e-84b0-b581db2401be-kube-api-access-q24kq\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.272851 4931 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22e7c3b5-dbbe-499e-84b0-b581db2401be-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.272869 4931 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.272887 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7c3b5-dbbe-499e-84b0-b581db2401be-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.748084 4931 generic.go:334] "Generic (PLEG): container finished" podID="22e7c3b5-dbbe-499e-84b0-b581db2401be" containerID="334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051" exitCode=0 Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.748154 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" event={"ID":"22e7c3b5-dbbe-499e-84b0-b581db2401be","Type":"ContainerDied","Data":"334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051"} Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.748188 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.748215 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796947dbf8-vrtc2" event={"ID":"22e7c3b5-dbbe-499e-84b0-b581db2401be","Type":"ContainerDied","Data":"8730f1dad4de33142daffe01ac15a9b43daa10f63aaa49338319e0fede7681d4"} Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.748249 4931 scope.go:117] "RemoveContainer" containerID="334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.777540 4931 scope.go:117] "RemoveContainer" containerID="334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051" Jan 30 05:13:28 crc kubenswrapper[4931]: E0130 05:13:28.778205 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051\": container with ID starting with 334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051 not found: ID does not exist" containerID="334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.778300 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051"} err="failed to get container status \"334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051\": rpc error: code = NotFound desc = could not find container \"334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051\": container with ID starting with 334ca40b7e5247624f4b661300f84d8d55aa7741bb4085f58747cd976544f051 not found: ID does not exist" Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.809024 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-vrtc2"] Jan 30 05:13:28 crc kubenswrapper[4931]: I0130 05:13:28.815100 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-796947dbf8-vrtc2"] Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.435516 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e7c3b5-dbbe-499e-84b0-b581db2401be" path="/var/lib/kubelet/pods/22e7c3b5-dbbe-499e-84b0-b581db2401be/volumes" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.679065 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67db4dc676-9s7v8"] Jan 30 05:13:29 crc kubenswrapper[4931]: E0130 05:13:29.679923 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e7c3b5-dbbe-499e-84b0-b581db2401be" containerName="controller-manager" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.679948 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e7c3b5-dbbe-499e-84b0-b581db2401be" containerName="controller-manager" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.680150 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e7c3b5-dbbe-499e-84b0-b581db2401be" containerName="controller-manager" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.680760 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.685686 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.686037 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.686302 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.686730 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.687017 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.695326 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.699104 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.715756 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db4dc676-9s7v8"] Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.799077 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-client-ca\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.799467 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde9b145-fcfe-4d25-81bf-9eeb73805640-serving-cert\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.799562 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-proxy-ca-bundles\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.799701 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-config\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.799819 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s22n2\" (UniqueName: 
\"kubernetes.io/projected/dde9b145-fcfe-4d25-81bf-9eeb73805640-kube-api-access-s22n2\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.901883 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-config\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.902142 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s22n2\" (UniqueName: \"kubernetes.io/projected/dde9b145-fcfe-4d25-81bf-9eeb73805640-kube-api-access-s22n2\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.902201 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-client-ca\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.902260 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde9b145-fcfe-4d25-81bf-9eeb73805640-serving-cert\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.903352 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-client-ca\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.903816 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-proxy-ca-bundles\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.904701 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-proxy-ca-bundles\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.905459 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde9b145-fcfe-4d25-81bf-9eeb73805640-config\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " 
pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.916316 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dde9b145-fcfe-4d25-81bf-9eeb73805640-serving-cert\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:29 crc kubenswrapper[4931]: I0130 05:13:29.930397 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s22n2\" (UniqueName: \"kubernetes.io/projected/dde9b145-fcfe-4d25-81bf-9eeb73805640-kube-api-access-s22n2\") pod \"controller-manager-67db4dc676-9s7v8\" (UID: \"dde9b145-fcfe-4d25-81bf-9eeb73805640\") " pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:30 crc kubenswrapper[4931]: I0130 05:13:30.012659 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:30 crc kubenswrapper[4931]: I0130 05:13:30.260329 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db4dc676-9s7v8"] Jan 30 05:13:30 crc kubenswrapper[4931]: I0130 05:13:30.768025 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" event={"ID":"dde9b145-fcfe-4d25-81bf-9eeb73805640","Type":"ContainerStarted","Data":"79ad1673422cd9fe737fc31029457d07d4dc0b75ffec4f181f02b42877da04cf"} Jan 30 05:13:30 crc kubenswrapper[4931]: I0130 05:13:30.768113 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" event={"ID":"dde9b145-fcfe-4d25-81bf-9eeb73805640","Type":"ContainerStarted","Data":"c4ac253b910233a2b86b7a54b421450094e8ed2c4080cf539e6c53fd41b4df35"} Jan 30 05:13:30 crc kubenswrapper[4931]: I0130 05:13:30.803586 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" podStartSLOduration=3.803564286 podStartE2EDuration="3.803564286s" podCreationTimestamp="2026-01-30 05:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:13:30.800098909 +0000 UTC m=+346.170009166" watchObservedRunningTime="2026-01-30 05:13:30.803564286 +0000 UTC m=+346.173474543" Jan 30 05:13:31 crc kubenswrapper[4931]: I0130 05:13:31.782185 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:31 crc kubenswrapper[4931]: I0130 05:13:31.791064 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67db4dc676-9s7v8" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.153815 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kf2zk"] Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.161641 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.163517 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf2zk"] Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.165181 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.279709 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kl88\" (UniqueName: \"kubernetes.io/projected/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-kube-api-access-8kl88\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.279796 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-utilities\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.279923 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-catalog-content\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.343748 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wn8rd"] Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.346198 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.349172 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.358644 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wn8rd"] Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.381534 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-catalog-content\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.381615 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kl88\" (UniqueName: \"kubernetes.io/projected/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-kube-api-access-8kl88\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.381657 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-utilities\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.382164 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-catalog-content\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.382212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-utilities\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.435106 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kl88\" (UniqueName: \"kubernetes.io/projected/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-kube-api-access-8kl88\") pod \"community-operators-kf2zk\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.482826 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-catalog-content\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.482996 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2896n\" (UniqueName: \"kubernetes.io/projected/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-kube-api-access-2896n\") pod \"certified-operators-wn8rd\" (UID: 
\"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.483140 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-utilities\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.498047 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.584262 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-catalog-content\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.584863 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2896n\" (UniqueName: \"kubernetes.io/projected/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-kube-api-access-2896n\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.584910 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-catalog-content\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.584956 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-utilities\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.585544 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-utilities\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.610364 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2896n\" (UniqueName: \"kubernetes.io/projected/f88493be-1e8e-47b8-9ac7-d035ba0b6e36-kube-api-access-2896n\") pod \"certified-operators-wn8rd\" (UID: \"f88493be-1e8e-47b8-9ac7-d035ba0b6e36\") " pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.714476 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:44 crc kubenswrapper[4931]: I0130 05:13:44.999032 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kf2zk"] Jan 30 05:13:45 crc kubenswrapper[4931]: W0130 05:13:45.004783 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e7fc26b_b0a0_4ed3_973a_d14f3118f495.slice/crio-1886542e78e49a17a6d2a06541b8bb125f47e58db49595a42b19bb6b2b8126f7 WatchSource:0}: Error finding container 1886542e78e49a17a6d2a06541b8bb125f47e58db49595a42b19bb6b2b8126f7: Status 404 returned error can't find the container with id 1886542e78e49a17a6d2a06541b8bb125f47e58db49595a42b19bb6b2b8126f7 Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.194928 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wn8rd"] Jan 30 05:13:45 crc kubenswrapper[4931]: W0130 05:13:45.205311 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf88493be_1e8e_47b8_9ac7_d035ba0b6e36.slice/crio-b6af702879f7d9cff5a84f904295f2aa1e0f5d5d5643273bfb3341cada78a420 WatchSource:0}: Error finding container b6af702879f7d9cff5a84f904295f2aa1e0f5d5d5643273bfb3341cada78a420: Status 404 returned error can't find the container with id b6af702879f7d9cff5a84f904295f2aa1e0f5d5d5643273bfb3341cada78a420 Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.899745 4931 generic.go:334] "Generic (PLEG): container finished" podID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerID="354aad0cad4b5a2844a0aaa97a5d9c4e75d0d2f7996caccea5b63021c15588c0" exitCode=0 Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.899852 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerDied","Data":"354aad0cad4b5a2844a0aaa97a5d9c4e75d0d2f7996caccea5b63021c15588c0"} Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.899894 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerStarted","Data":"1886542e78e49a17a6d2a06541b8bb125f47e58db49595a42b19bb6b2b8126f7"} Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.901911 4931 generic.go:334] "Generic (PLEG): container finished" podID="f88493be-1e8e-47b8-9ac7-d035ba0b6e36" containerID="45c24f494b946d56f859342104d13bbfa98146933c4ff15c61aeb6bdc04ed7e5" exitCode=0 Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.901960 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn8rd" event={"ID":"f88493be-1e8e-47b8-9ac7-d035ba0b6e36","Type":"ContainerDied","Data":"45c24f494b946d56f859342104d13bbfa98146933c4ff15c61aeb6bdc04ed7e5"} Jan 30 05:13:45 crc kubenswrapper[4931]: I0130 05:13:45.901991 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn8rd" event={"ID":"f88493be-1e8e-47b8-9ac7-d035ba0b6e36","Type":"ContainerStarted","Data":"b6af702879f7d9cff5a84f904295f2aa1e0f5d5d5643273bfb3341cada78a420"} Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.549171 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6b74"] Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.551944 4931 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.561931 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.563523 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6b74"] Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.718868 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aacb80d-976e-4059-9c84-857aab618f4e-catalog-content\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.719510 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrdj\" (UniqueName: \"kubernetes.io/projected/5aacb80d-976e-4059-9c84-857aab618f4e-kube-api-access-dkrdj\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.719568 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aacb80d-976e-4059-9c84-857aab618f4e-utilities\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.750280 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kg222"] Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.753342 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.759049 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.765630 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kg222"] Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.821040 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aacb80d-976e-4059-9c84-857aab618f4e-catalog-content\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.821123 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkrdj\" (UniqueName: \"kubernetes.io/projected/5aacb80d-976e-4059-9c84-857aab618f4e-kube-api-access-dkrdj\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.821177 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aacb80d-976e-4059-9c84-857aab618f4e-utilities\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.822839 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aacb80d-976e-4059-9c84-857aab618f4e-utilities\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.822878 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aacb80d-976e-4059-9c84-857aab618f4e-catalog-content\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.867743 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrdj\" (UniqueName: \"kubernetes.io/projected/5aacb80d-976e-4059-9c84-857aab618f4e-kube-api-access-dkrdj\") pod \"redhat-marketplace-w6b74\" (UID: \"5aacb80d-976e-4059-9c84-857aab618f4e\") " pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.892898 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.915317 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerStarted","Data":"3075fb652ae28ffe627ee2fb3e561168de8141b7ab9f92e2f6fdc3f70ab564db"} Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.918874 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn8rd" event={"ID":"f88493be-1e8e-47b8-9ac7-d035ba0b6e36","Type":"ContainerStarted","Data":"27fd229db93cfe74643d84e10ec520a9d77f0a3857d8b1d7dc0212a63749cb0c"} Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.923737 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxzw\" (UniqueName: \"kubernetes.io/projected/4c0c107d-a03c-479f-b127-2824affd9b35-kube-api-access-9cxzw\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.925204 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0c107d-a03c-479f-b127-2824affd9b35-catalog-content\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:46 crc kubenswrapper[4931]: I0130 05:13:46.926709 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0c107d-a03c-479f-b127-2824affd9b35-utilities\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.028023 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cxzw\" (UniqueName: \"kubernetes.io/projected/4c0c107d-a03c-479f-b127-2824affd9b35-kube-api-access-9cxzw\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.028461 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0c107d-a03c-479f-b127-2824affd9b35-catalog-content\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.029077 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0c107d-a03c-479f-b127-2824affd9b35-catalog-content\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.029211 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0c107d-a03c-479f-b127-2824affd9b35-utilities\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: 
I0130 05:13:47.029533 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0c107d-a03c-479f-b127-2824affd9b35-utilities\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.051869 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cxzw\" (UniqueName: \"kubernetes.io/projected/4c0c107d-a03c-479f-b127-2824affd9b35-kube-api-access-9cxzw\") pod \"redhat-operators-kg222\" (UID: \"4c0c107d-a03c-479f-b127-2824affd9b35\") " pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.108548 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.334843 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6b74"] Jan 30 05:13:47 crc kubenswrapper[4931]: W0130 05:13:47.340036 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aacb80d_976e_4059_9c84_857aab618f4e.slice/crio-6ea9b713e2411eb4cf5ff9e021770cc75e6120c762332593b8e9c9fef28ef132 WatchSource:0}: Error finding container 6ea9b713e2411eb4cf5ff9e021770cc75e6120c762332593b8e9c9fef28ef132: Status 404 returned error can't find the container with id 6ea9b713e2411eb4cf5ff9e021770cc75e6120c762332593b8e9c9fef28ef132 Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.557199 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kg222"] Jan 30 05:13:47 crc kubenswrapper[4931]: W0130 05:13:47.632535 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c0c107d_a03c_479f_b127_2824affd9b35.slice/crio-b4419f302b91b29cf39b49013976953759032f4b6207ce1e5990915f2ffd1bd4 WatchSource:0}: Error finding container b4419f302b91b29cf39b49013976953759032f4b6207ce1e5990915f2ffd1bd4: Status 404 returned error can't find the container with id b4419f302b91b29cf39b49013976953759032f4b6207ce1e5990915f2ffd1bd4 Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.930088 4931 generic.go:334] "Generic (PLEG): container finished" podID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerID="3075fb652ae28ffe627ee2fb3e561168de8141b7ab9f92e2f6fdc3f70ab564db" exitCode=0 Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.930212 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerDied","Data":"3075fb652ae28ffe627ee2fb3e561168de8141b7ab9f92e2f6fdc3f70ab564db"} Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.933535 4931 generic.go:334] "Generic (PLEG): container finished" podID="4c0c107d-a03c-479f-b127-2824affd9b35" containerID="6186d49243afe7d666fe2ccf6dce075fb9428c4fc6045951ddee6f7395f960d2" exitCode=0 Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.933617 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg222" event={"ID":"4c0c107d-a03c-479f-b127-2824affd9b35","Type":"ContainerDied","Data":"6186d49243afe7d666fe2ccf6dce075fb9428c4fc6045951ddee6f7395f960d2"} Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.933661 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg222" event={"ID":"4c0c107d-a03c-479f-b127-2824affd9b35","Type":"ContainerStarted","Data":"b4419f302b91b29cf39b49013976953759032f4b6207ce1e5990915f2ffd1bd4"} Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.944957 4931 generic.go:334] "Generic (PLEG): container finished" podID="f88493be-1e8e-47b8-9ac7-d035ba0b6e36" containerID="27fd229db93cfe74643d84e10ec520a9d77f0a3857d8b1d7dc0212a63749cb0c" exitCode=0 Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.945050 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn8rd" event={"ID":"f88493be-1e8e-47b8-9ac7-d035ba0b6e36","Type":"ContainerDied","Data":"27fd229db93cfe74643d84e10ec520a9d77f0a3857d8b1d7dc0212a63749cb0c"} Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.951484 4931 generic.go:334] "Generic (PLEG): container finished" podID="5aacb80d-976e-4059-9c84-857aab618f4e" containerID="6b360c827ca6ade30a85377101c3662f34c4012c21bad96a60cbcf8efc2c37fb" exitCode=0 Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.951538 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6b74" event={"ID":"5aacb80d-976e-4059-9c84-857aab618f4e","Type":"ContainerDied","Data":"6b360c827ca6ade30a85377101c3662f34c4012c21bad96a60cbcf8efc2c37fb"} Jan 30 05:13:47 crc kubenswrapper[4931]: I0130 05:13:47.951594 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6b74" event={"ID":"5aacb80d-976e-4059-9c84-857aab618f4e","Type":"ContainerStarted","Data":"6ea9b713e2411eb4cf5ff9e021770cc75e6120c762332593b8e9c9fef28ef132"} Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.249495 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" podUID="32e4a367-9945-4fdb-b5bc-4c8d35512264" containerName="registry" containerID="cri-o://a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9" gracePeriod=30 Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.844110 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955540 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32e4a367-9945-4fdb-b5bc-4c8d35512264-ca-trust-extracted\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955596 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-tls\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955635 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-certificates\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955703 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nltg\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-kube-api-access-6nltg\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955810 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955831 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32e4a367-9945-4fdb-b5bc-4c8d35512264-installation-pull-secrets\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.955854 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-trusted-ca\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.956757 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.956864 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-bound-sa-token\") pod \"32e4a367-9945-4fdb-b5bc-4c8d35512264\" (UID: \"32e4a367-9945-4fdb-b5bc-4c8d35512264\") " Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.957016 4931 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.962709 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-kube-api-access-6nltg" (OuterVolumeSpecName: "kube-api-access-6nltg") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "kube-api-access-6nltg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.963039 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.964522 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e4a367-9945-4fdb-b5bc-4c8d35512264-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.964822 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.968948 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.969388 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg222" event={"ID":"4c0c107d-a03c-479f-b127-2824affd9b35","Type":"ContainerStarted","Data":"165b5c448d74db26e17321b06cbf97fa28ab34612e8d88e1196f010e5fec0247"} Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.973888 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.987155 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wn8rd" event={"ID":"f88493be-1e8e-47b8-9ac7-d035ba0b6e36","Type":"ContainerStarted","Data":"07276f5318ed5db8e2469700b70de6a9f88c4112e661c5f07bcfe2b44242a25c"} Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.989749 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32e4a367-9945-4fdb-b5bc-4c8d35512264-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "32e4a367-9945-4fdb-b5bc-4c8d35512264" (UID: "32e4a367-9945-4fdb-b5bc-4c8d35512264"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.993753 4931 generic.go:334] "Generic (PLEG): container finished" podID="32e4a367-9945-4fdb-b5bc-4c8d35512264" containerID="a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9" exitCode=0 Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.993888 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.994730 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" event={"ID":"32e4a367-9945-4fdb-b5bc-4c8d35512264","Type":"ContainerDied","Data":"a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9"} Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.994794 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f8zg7" event={"ID":"32e4a367-9945-4fdb-b5bc-4c8d35512264","Type":"ContainerDied","Data":"79ebc9473f22f72df11aa297cb419ebdd7c57ca36caf670a91a0d056621b7c54"} Jan 30 05:13:48 crc kubenswrapper[4931]: I0130 05:13:48.994816 4931 scope.go:117] "RemoveContainer" containerID="a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.003363 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerStarted","Data":"f2435e1bbad2c342892feef9048c365b9106973a68f19a101621695990d1928d"} Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.038665 4931 scope.go:117] "RemoveContainer" containerID="a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.050118 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wn8rd" podStartSLOduration=2.601924856 podStartE2EDuration="5.049810353s" podCreationTimestamp="2026-01-30 05:13:44 +0000 UTC" firstStartedPulling="2026-01-30 05:13:45.905306821 +0000 UTC m=+361.275217098" lastFinishedPulling="2026-01-30 05:13:48.353192298 +0000 UTC m=+363.723102595" observedRunningTime="2026-01-30 05:13:49.029712047 +0000 UTC m=+364.399622334" watchObservedRunningTime="2026-01-30 05:13:49.049810353 +0000 UTC m=+364.419720630" Jan 30 05:13:49 crc kubenswrapper[4931]: E0130 05:13:49.055931 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9\": container with ID starting with a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9 not found: ID does not exist" containerID="a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.056004 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9"} err="failed to get container status \"a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9\": rpc error: code = NotFound desc = could not find container \"a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9\": container with ID starting with a4b4ba0915f07fcf004390096fa0954885a608b2a848ca36a182ab0388cbfba9 not found: ID does not exist" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.058706 4931 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/32e4a367-9945-4fdb-b5bc-4c8d35512264-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.059130 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/32e4a367-9945-4fdb-b5bc-4c8d35512264-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.059365 4931 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.059570 4931 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/32e4a367-9945-4fdb-b5bc-4c8d35512264-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.059760 4931 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.059967 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nltg\" (UniqueName: \"kubernetes.io/projected/32e4a367-9945-4fdb-b5bc-4c8d35512264-kube-api-access-6nltg\") on node \"crc\" DevicePath \"\"" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.084138 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f8zg7"] Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.088052 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f8zg7"] Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.088785 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kf2zk" podStartSLOduration=2.586018518 podStartE2EDuration="5.08876755s" podCreationTimestamp="2026-01-30 05:13:44 +0000 UTC" firstStartedPulling="2026-01-30 05:13:45.906647088 +0000 UTC m=+361.276557355" lastFinishedPulling="2026-01-30 05:13:48.40939612 +0000 UTC m=+363.779306387" observedRunningTime="2026-01-30 05:13:49.079813038 +0000 UTC m=+364.449723295" watchObservedRunningTime="2026-01-30 05:13:49.08876755 +0000 UTC m=+364.458677807" Jan 30 05:13:49 crc kubenswrapper[4931]: I0130 05:13:49.429803 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e4a367-9945-4fdb-b5bc-4c8d35512264" path="/var/lib/kubelet/pods/32e4a367-9945-4fdb-b5bc-4c8d35512264/volumes" Jan 30 05:13:50 crc kubenswrapper[4931]: I0130 05:13:50.012615 4931 generic.go:334] "Generic (PLEG): container finished" podID="4c0c107d-a03c-479f-b127-2824affd9b35" containerID="165b5c448d74db26e17321b06cbf97fa28ab34612e8d88e1196f010e5fec0247" exitCode=0 Jan 30 05:13:50 crc kubenswrapper[4931]: I0130 05:13:50.012742 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg222" event={"ID":"4c0c107d-a03c-479f-b127-2824affd9b35","Type":"ContainerDied","Data":"165b5c448d74db26e17321b06cbf97fa28ab34612e8d88e1196f010e5fec0247"} Jan 30 05:13:50 crc kubenswrapper[4931]: I0130 05:13:50.015871 4931 generic.go:334] "Generic (PLEG): container finished" podID="5aacb80d-976e-4059-9c84-857aab618f4e" containerID="9e41ae65676a2a4218fe39ac6aa91019b0cff14cc71a71f96119ceb918456d3a" exitCode=0 Jan 30 05:13:50 crc kubenswrapper[4931]: I0130 05:13:50.016294 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6b74" 
event={"ID":"5aacb80d-976e-4059-9c84-857aab618f4e","Type":"ContainerDied","Data":"9e41ae65676a2a4218fe39ac6aa91019b0cff14cc71a71f96119ceb918456d3a"} Jan 30 05:13:51 crc kubenswrapper[4931]: I0130 05:13:51.026619 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kg222" event={"ID":"4c0c107d-a03c-479f-b127-2824affd9b35","Type":"ContainerStarted","Data":"01338e299128fcc6de41c64c196d979b4e4f88cb88b53aa2c7878ac629faa42f"} Jan 30 05:13:51 crc kubenswrapper[4931]: I0130 05:13:51.033201 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6b74" event={"ID":"5aacb80d-976e-4059-9c84-857aab618f4e","Type":"ContainerStarted","Data":"8762433b8429fe35611223268c6addbe95078f2256b8feee27b88c7c4a2321ad"} Jan 30 05:13:51 crc kubenswrapper[4931]: I0130 05:13:51.062122 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kg222" podStartSLOduration=2.522917738 podStartE2EDuration="5.062098906s" podCreationTimestamp="2026-01-30 05:13:46 +0000 UTC" firstStartedPulling="2026-01-30 05:13:47.936162876 +0000 UTC m=+363.306073133" lastFinishedPulling="2026-01-30 05:13:50.475344044 +0000 UTC m=+365.845254301" observedRunningTime="2026-01-30 05:13:51.054966505 +0000 UTC m=+366.424876802" watchObservedRunningTime="2026-01-30 05:13:51.062098906 +0000 UTC m=+366.432009193" Jan 30 05:13:51 crc kubenswrapper[4931]: I0130 05:13:51.087642 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6b74" podStartSLOduration=2.585274394 podStartE2EDuration="5.087624215s" podCreationTimestamp="2026-01-30 05:13:46 +0000 UTC" firstStartedPulling="2026-01-30 05:13:47.953788622 +0000 UTC m=+363.323698889" lastFinishedPulling="2026-01-30 05:13:50.456138463 +0000 UTC m=+365.826048710" observedRunningTime="2026-01-30 05:13:51.081999976 +0000 UTC m=+366.451910273" watchObservedRunningTime="2026-01-30 05:13:51.087624215 +0000 UTC m=+366.457534472" Jan 30 05:13:54 crc kubenswrapper[4931]: I0130 05:13:54.498573 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:54 crc kubenswrapper[4931]: I0130 05:13:54.499282 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:54 crc kubenswrapper[4931]: I0130 05:13:54.581014 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:54 crc kubenswrapper[4931]: I0130 05:13:54.715053 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:54 crc kubenswrapper[4931]: I0130 05:13:54.715148 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:54 crc kubenswrapper[4931]: I0130 05:13:54.778390 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:55 crc kubenswrapper[4931]: I0130 05:13:55.131738 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 05:13:55 crc kubenswrapper[4931]: I0130 05:13:55.137004 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-wn8rd" Jan 30 05:13:56 crc kubenswrapper[4931]: I0130 05:13:56.894202 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:56 crc kubenswrapper[4931]: I0130 05:13:56.894743 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:56 crc kubenswrapper[4931]: I0130 05:13:56.968987 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:57 crc kubenswrapper[4931]: I0130 05:13:57.109512 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:57 crc kubenswrapper[4931]: I0130 05:13:57.109580 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:13:57 crc kubenswrapper[4931]: I0130 05:13:57.168213 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6b74" Jan 30 05:13:57 crc kubenswrapper[4931]: I0130 05:13:57.364117 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:13:57 crc kubenswrapper[4931]: I0130 05:13:57.364219 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:13:58 crc kubenswrapper[4931]: I0130 05:13:58.171384 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kg222" podUID="4c0c107d-a03c-479f-b127-2824affd9b35" containerName="registry-server" probeResult="failure" output=< Jan 30 05:13:58 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 05:13:58 crc kubenswrapper[4931]: > Jan 30 05:14:07 crc kubenswrapper[4931]: I0130 05:14:07.181637 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:14:07 crc kubenswrapper[4931]: I0130 05:14:07.236195 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kg222" Jan 30 05:14:27 crc kubenswrapper[4931]: I0130 05:14:27.363700 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:14:27 crc kubenswrapper[4931]: I0130 05:14:27.364518 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:14:27 crc kubenswrapper[4931]: I0130 05:14:27.364607 4931 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:14:27 crc kubenswrapper[4931]: I0130 05:14:27.365513 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"595c92fb582df913939826c54f51177c31890e24d0ac56595342acf9749b06a1"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:14:27 crc kubenswrapper[4931]: I0130 05:14:27.365582 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://595c92fb582df913939826c54f51177c31890e24d0ac56595342acf9749b06a1" gracePeriod=600 Jan 30 05:14:28 crc kubenswrapper[4931]: I0130 05:14:28.309479 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="595c92fb582df913939826c54f51177c31890e24d0ac56595342acf9749b06a1" exitCode=0 Jan 30 05:14:28 crc kubenswrapper[4931]: I0130 05:14:28.309575 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"595c92fb582df913939826c54f51177c31890e24d0ac56595342acf9749b06a1"} Jan 30 05:14:28 crc kubenswrapper[4931]: I0130 05:14:28.309915 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"ca23316a7a7a0870cd6ce778a3ddf7b3692d29f58078872d0288efcbee40c2e8"} Jan 30 05:14:28 crc kubenswrapper[4931]: I0130 05:14:28.309968 4931 scope.go:117] "RemoveContainer" containerID="f1f2cffd1648795ac3e32c2c575ebedaff06806cb99eadf1e6def95e47f01d96" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.210457 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb"] Jan 30 05:15:00 crc kubenswrapper[4931]: E0130 05:15:00.211506 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e4a367-9945-4fdb-b5bc-4c8d35512264" containerName="registry" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.211530 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e4a367-9945-4fdb-b5bc-4c8d35512264" containerName="registry" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.211694 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e4a367-9945-4fdb-b5bc-4c8d35512264" containerName="registry" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.212340 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.215094 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.223784 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.231396 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb"] Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.396983 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjfz\" (UniqueName: \"kubernetes.io/projected/2119e7a8-c484-4aef-ac04-c3f82433738d-kube-api-access-bgjfz\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.397333 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2119e7a8-c484-4aef-ac04-c3f82433738d-config-volume\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.397601 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2119e7a8-c484-4aef-ac04-c3f82433738d-secret-volume\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.499268 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2119e7a8-c484-4aef-ac04-c3f82433738d-config-volume\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.499403 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2119e7a8-c484-4aef-ac04-c3f82433738d-secret-volume\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.499480 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjfz\" (UniqueName: \"kubernetes.io/projected/2119e7a8-c484-4aef-ac04-c3f82433738d-kube-api-access-bgjfz\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.500986 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2119e7a8-c484-4aef-ac04-c3f82433738d-config-volume\") pod 
\"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.510699 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2119e7a8-c484-4aef-ac04-c3f82433738d-secret-volume\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.529663 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjfz\" (UniqueName: \"kubernetes.io/projected/2119e7a8-c484-4aef-ac04-c3f82433738d-kube-api-access-bgjfz\") pod \"collect-profiles-29495835-kk9gb\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:00 crc kubenswrapper[4931]: I0130 05:15:00.545190 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:01 crc kubenswrapper[4931]: I0130 05:15:01.054172 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb"] Jan 30 05:15:01 crc kubenswrapper[4931]: W0130 05:15:01.066581 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2119e7a8_c484_4aef_ac04_c3f82433738d.slice/crio-94874094e900641691e4daed7904fb2f898cf4600c87bf824f1a717fb45da16c WatchSource:0}: Error finding container 94874094e900641691e4daed7904fb2f898cf4600c87bf824f1a717fb45da16c: Status 404 returned error can't find the container with id 94874094e900641691e4daed7904fb2f898cf4600c87bf824f1a717fb45da16c Jan 30 05:15:01 crc kubenswrapper[4931]: I0130 05:15:01.607855 4931 generic.go:334] "Generic (PLEG): container finished" podID="2119e7a8-c484-4aef-ac04-c3f82433738d" containerID="93024ef1482e0faf5c83b31d25bb0153752fe08f7d8619cc6cdb7d2120e5e084" exitCode=0 Jan 30 05:15:01 crc kubenswrapper[4931]: I0130 05:15:01.607971 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" event={"ID":"2119e7a8-c484-4aef-ac04-c3f82433738d","Type":"ContainerDied","Data":"93024ef1482e0faf5c83b31d25bb0153752fe08f7d8619cc6cdb7d2120e5e084"} Jan 30 05:15:01 crc kubenswrapper[4931]: I0130 05:15:01.608340 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" event={"ID":"2119e7a8-c484-4aef-ac04-c3f82433738d","Type":"ContainerStarted","Data":"94874094e900641691e4daed7904fb2f898cf4600c87bf824f1a717fb45da16c"} Jan 30 05:15:02 crc kubenswrapper[4931]: I0130 05:15:02.956757 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.140839 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2119e7a8-c484-4aef-ac04-c3f82433738d-config-volume\") pod \"2119e7a8-c484-4aef-ac04-c3f82433738d\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.140959 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjfz\" (UniqueName: \"kubernetes.io/projected/2119e7a8-c484-4aef-ac04-c3f82433738d-kube-api-access-bgjfz\") pod \"2119e7a8-c484-4aef-ac04-c3f82433738d\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.141062 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2119e7a8-c484-4aef-ac04-c3f82433738d-secret-volume\") pod \"2119e7a8-c484-4aef-ac04-c3f82433738d\" (UID: \"2119e7a8-c484-4aef-ac04-c3f82433738d\") " Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.142472 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2119e7a8-c484-4aef-ac04-c3f82433738d-config-volume" (OuterVolumeSpecName: "config-volume") pod "2119e7a8-c484-4aef-ac04-c3f82433738d" (UID: "2119e7a8-c484-4aef-ac04-c3f82433738d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.148370 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2119e7a8-c484-4aef-ac04-c3f82433738d-kube-api-access-bgjfz" (OuterVolumeSpecName: "kube-api-access-bgjfz") pod "2119e7a8-c484-4aef-ac04-c3f82433738d" (UID: "2119e7a8-c484-4aef-ac04-c3f82433738d"). InnerVolumeSpecName "kube-api-access-bgjfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.151762 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2119e7a8-c484-4aef-ac04-c3f82433738d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2119e7a8-c484-4aef-ac04-c3f82433738d" (UID: "2119e7a8-c484-4aef-ac04-c3f82433738d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.243502 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2119e7a8-c484-4aef-ac04-c3f82433738d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.243557 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2119e7a8-c484-4aef-ac04-c3f82433738d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.243577 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjfz\" (UniqueName: \"kubernetes.io/projected/2119e7a8-c484-4aef-ac04-c3f82433738d-kube-api-access-bgjfz\") on node \"crc\" DevicePath \"\"" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.633126 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" event={"ID":"2119e7a8-c484-4aef-ac04-c3f82433738d","Type":"ContainerDied","Data":"94874094e900641691e4daed7904fb2f898cf4600c87bf824f1a717fb45da16c"} Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.633184 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb" Jan 30 05:15:03 crc kubenswrapper[4931]: I0130 05:15:03.633210 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94874094e900641691e4daed7904fb2f898cf4600c87bf824f1a717fb45da16c" Jan 30 05:16:27 crc kubenswrapper[4931]: I0130 05:16:27.362975 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:16:27 crc kubenswrapper[4931]: I0130 05:16:27.363848 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:16:57 crc kubenswrapper[4931]: I0130 05:16:57.363220 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:16:57 crc kubenswrapper[4931]: I0130 05:16:57.363994 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.363084 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.363848 
4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.363918 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.364925 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca23316a7a7a0870cd6ce778a3ddf7b3692d29f58078872d0288efcbee40c2e8"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.365074 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://ca23316a7a7a0870cd6ce778a3ddf7b3692d29f58078872d0288efcbee40c2e8" gracePeriod=600 Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.694887 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="ca23316a7a7a0870cd6ce778a3ddf7b3692d29f58078872d0288efcbee40c2e8" exitCode=0 Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.694992 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"ca23316a7a7a0870cd6ce778a3ddf7b3692d29f58078872d0288efcbee40c2e8"} Jan 30 05:17:27 crc kubenswrapper[4931]: I0130 05:17:27.695554 4931 scope.go:117] "RemoveContainer" containerID="595c92fb582df913939826c54f51177c31890e24d0ac56595342acf9749b06a1" Jan 30 05:17:28 crc kubenswrapper[4931]: I0130 05:17:28.709467 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"1794ca6ffdd404c39dffe9fa048526a3a78869de00d876a52d3bd280c8bbc2a2"} Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.554392 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bshbf"] Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556027 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-controller" containerID="cri-o://61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556252 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="sbdb" containerID="cri-o://9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556318 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" 
podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="nbdb" containerID="cri-o://a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556374 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="northd" containerID="cri-o://839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556463 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556580 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-acl-logging" containerID="cri-o://baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.556904 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-node" containerID="cri-o://a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.622188 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" containerID="cri-o://cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" gracePeriod=30 Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.943514 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/3.log" Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.946122 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovn-acl-logging/0.log" Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.946771 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovn-controller/0.log" Jan 30 05:18:19 crc kubenswrapper[4931]: I0130 05:18:19.947613 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039164 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-s68jr"] Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039534 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039553 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039567 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="sbdb" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039576 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="sbdb" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039586 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039593 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039604 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="nbdb" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039610 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="nbdb" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039624 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039630 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039638 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039644 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039654 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="northd" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039662 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="northd" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039672 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039680 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039691 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039701 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039710 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-acl-logging" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039718 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-acl-logging" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039731 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2119e7a8-c484-4aef-ac04-c3f82433738d" containerName="collect-profiles" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039737 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2119e7a8-c484-4aef-ac04-c3f82433738d" containerName="collect-profiles" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039745 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kubecfg-setup" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039751 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kubecfg-setup" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.039761 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-node" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039768 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-node" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039902 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="sbdb" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039916 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039924 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039934 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="nbdb" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039942 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2119e7a8-c484-4aef-ac04-c3f82433738d" containerName="collect-profiles" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039951 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-node" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039959 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039966 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="northd" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039974 4931 
memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039980 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovn-acl-logging" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039990 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.039997 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.040139 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.040146 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.040339 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" containerName="ovnkube-controller" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.042563 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.070940 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-bin\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.070992 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-slash\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071043 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-systemd\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071114 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-ovn\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071150 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-node-log\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071207 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-script-lib\") pod 
\"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071235 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-log-socket\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071254 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-slash" (OuterVolumeSpecName: "host-slash") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071283 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-openvswitch\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071304 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-ovn-kubernetes\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071320 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071324 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-systemd-units\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071370 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-netd\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071404 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-etc-openvswitch\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071530 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-config\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071559 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-env-overrides\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071583 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-var-lib-cni-networks-ovn-kubernetes\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071628 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-var-lib-openvswitch\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071758 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-netns\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071782 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-kubelet\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071855 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwbjv\" (UniqueName: 
\"kubernetes.io/projected/556d9fc5-72b4-4134-8074-1e9d07012763-kube-api-access-rwbjv\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.071995 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556d9fc5-72b4-4134-8074-1e9d07012763-ovn-node-metrics-cert\") pod \"556d9fc5-72b4-4134-8074-1e9d07012763\" (UID: \"556d9fc5-72b4-4134-8074-1e9d07012763\") " Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.072063 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.072371 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.072480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-node-log" (OuterVolumeSpecName: "node-log") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.072625 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073085 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073330 4931 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073365 4931 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-slash\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073375 4931 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073384 4931 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-node-log\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073394 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073404 4931 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073413 4931 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073596 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-log-socket" (OuterVolumeSpecName: "log-socket") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073620 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073658 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073680 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073748 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073771 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073808 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.073885 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.074101 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.082568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556d9fc5-72b4-4134-8074-1e9d07012763-kube-api-access-rwbjv" (OuterVolumeSpecName: "kube-api-access-rwbjv") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "kube-api-access-rwbjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.084544 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/2.log" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.085296 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/1.log" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.085353 4931 generic.go:334] "Generic (PLEG): container finished" podID="b17d6adf-e35b-4bf8-9ab2-e6720e595835" containerID="9cbe0bfee502f12e8f2f3a6f1a461efb27353f5529809ccc54fecbb26b304ada" exitCode=2 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.085473 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerDied","Data":"9cbe0bfee502f12e8f2f3a6f1a461efb27353f5529809ccc54fecbb26b304ada"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.085538 4931 scope.go:117] "RemoveContainer" containerID="c041a8359fd83023525663275e8ceb17e995c49053a1ecb2b1822b95caf07eb0" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.085871 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/556d9fc5-72b4-4134-8074-1e9d07012763-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.086399 4931 scope.go:117] "RemoveContainer" containerID="9cbe0bfee502f12e8f2f3a6f1a461efb27353f5529809ccc54fecbb26b304ada" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.086825 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lm7vv_openshift-multus(b17d6adf-e35b-4bf8-9ab2-e6720e595835)\"" pod="openshift-multus/multus-lm7vv" podUID="b17d6adf-e35b-4bf8-9ab2-e6720e595835" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.094994 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovnkube-controller/3.log" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.097014 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "556d9fc5-72b4-4134-8074-1e9d07012763" (UID: "556d9fc5-72b4-4134-8074-1e9d07012763"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.098275 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovn-acl-logging/0.log" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.098890 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bshbf_556d9fc5-72b4-4134-8074-1e9d07012763/ovn-controller/0.log" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099289 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" exitCode=0 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099318 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" exitCode=0 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099328 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" exitCode=0 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099341 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" exitCode=0 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099354 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" exitCode=0 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099366 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" exitCode=0 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099375 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" exitCode=143 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099387 4931 generic.go:334] "Generic (PLEG): container finished" podID="556d9fc5-72b4-4134-8074-1e9d07012763" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" exitCode=143 Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099415 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099487 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} Jan 30 05:18:20 crc 
kubenswrapper[4931]: I0130 05:18:20.099500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099514 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099526 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099540 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099554 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099563 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099571 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099578 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099586 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099594 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099602 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099610 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099618 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099628 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099640 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099648 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099656 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099664 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099671 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099679 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099686 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099693 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099701 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099708 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099718 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099728 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099736 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 
05:18:20.099744 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099753 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099760 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099768 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099775 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099783 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099790 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099798 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099808 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" event={"ID":"556d9fc5-72b4-4134-8074-1e9d07012763","Type":"ContainerDied","Data":"ae003bf2c8441af0b322798040d7d0e26c38e678b0b4800e8ee8c379eec9e42a"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099819 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099827 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099835 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099842 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099849 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 
05:18:20.099857 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099865 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099872 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099879 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.099886 4931 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.100022 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bshbf" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.123032 4931 scope.go:117] "RemoveContainer" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.153505 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bshbf"] Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.160053 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bshbf"] Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.160569 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.174990 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovnkube-script-lib\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175073 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-ovn\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175115 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovn-node-metrics-cert\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175149 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-kubelet\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175183 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-slash\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175226 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-log-socket\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175283 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-node-log\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175320 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-systemd-units\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175346 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovnkube-config\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175376 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-env-overrides\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175447 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-run-netns\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.175479 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.176635 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-cni-netd\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.176784 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-systemd\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.176993 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177080 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-etc-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177124 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177156 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-var-lib-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177288 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-cni-bin\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177407 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797v8\" (UniqueName: \"kubernetes.io/projected/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-kube-api-access-797v8\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177633 4931 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177658 4931 reconciler_common.go:293] 
"Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-log-socket\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177671 4931 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177684 4931 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177695 4931 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177705 4931 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177716 4931 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177727 4931 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/556d9fc5-72b4-4134-8074-1e9d07012763-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177738 4931 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177748 4931 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177759 4931 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/556d9fc5-72b4-4134-8074-1e9d07012763-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177790 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwbjv\" (UniqueName: \"kubernetes.io/projected/556d9fc5-72b4-4134-8074-1e9d07012763-kube-api-access-rwbjv\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.177802 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/556d9fc5-72b4-4134-8074-1e9d07012763-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.186098 4931 scope.go:117] "RemoveContainer" containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.209321 4931 scope.go:117] "RemoveContainer" 
containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.224828 4931 scope.go:117] "RemoveContainer" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.241735 4931 scope.go:117] "RemoveContainer" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.259911 4931 scope.go:117] "RemoveContainer" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.275078 4931 scope.go:117] "RemoveContainer" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279291 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-ovn\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279334 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovn-node-metrics-cert\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279363 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-kubelet\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279386 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-slash\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279415 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-log-socket\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279465 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-node-log\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279487 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-systemd-units\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279508 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovnkube-config\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279528 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-env-overrides\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279556 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-run-netns\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279579 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279602 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-cni-netd\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279631 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-systemd\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279678 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-etc-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279723 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279747 4931 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-var-lib-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279771 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-cni-bin\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279796 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797v8\" (UniqueName: \"kubernetes.io/projected/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-kube-api-access-797v8\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.279821 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovnkube-script-lib\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280109 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-etc-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280161 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-run-ovn-kubernetes\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280115 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-run-netns\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280253 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280251 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-var-lib-openvswitch\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280235 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280312 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-cni-bin\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280318 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-ovn\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280220 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-run-systemd\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280187 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-cni-netd\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280364 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-kubelet\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280462 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-systemd-units\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280466 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-log-socket\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280530 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-node-log\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280574 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-host-slash\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280660 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovnkube-script-lib\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.280947 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-env-overrides\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.281208 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovnkube-config\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.284279 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-ovn-node-metrics-cert\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.297473 4931 scope.go:117] "RemoveContainer" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.312877 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797v8\" (UniqueName: \"kubernetes.io/projected/43fde21b-c04b-428e-a4bb-4f6e4969bd5f-kube-api-access-797v8\") pod \"ovnkube-node-s68jr\" (UID: \"43fde21b-c04b-428e-a4bb-4f6e4969bd5f\") " pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.313386 4931 scope.go:117] "RemoveContainer" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.330187 4931 scope.go:117] "RemoveContainer" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.330736 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": container with ID starting with cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c not found: ID does not exist" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.330820 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} err="failed to get container status \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": rpc error: code = NotFound desc = could not find container \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": container with ID starting with cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c not found: ID does not 
exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.330887 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.331461 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": container with ID starting with 7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c not found: ID does not exist" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.331504 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} err="failed to get container status \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": rpc error: code = NotFound desc = could not find container \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": container with ID starting with 7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.331551 4931 scope.go:117] "RemoveContainer" containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.332116 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": container with ID starting with 9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8 not found: ID does not exist" containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.332172 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} err="failed to get container status \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": rpc error: code = NotFound desc = could not find container \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": container with ID starting with 9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.332212 4931 scope.go:117] "RemoveContainer" containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.332677 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": container with ID starting with a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29 not found: ID does not exist" containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.332704 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} err="failed to get container status \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": rpc error: code = NotFound desc = could not find container 
\"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": container with ID starting with a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.332727 4931 scope.go:117] "RemoveContainer" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.333061 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": container with ID starting with 839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512 not found: ID does not exist" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.333111 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} err="failed to get container status \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": rpc error: code = NotFound desc = could not find container \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": container with ID starting with 839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.333143 4931 scope.go:117] "RemoveContainer" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.333581 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": container with ID starting with 42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3 not found: ID does not exist" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.333606 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} err="failed to get container status \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": rpc error: code = NotFound desc = could not find container \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": container with ID starting with 42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.333623 4931 scope.go:117] "RemoveContainer" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.333853 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": container with ID starting with a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b not found: ID does not exist" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.333871 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} 
err="failed to get container status \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": rpc error: code = NotFound desc = could not find container \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": container with ID starting with a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.333888 4931 scope.go:117] "RemoveContainer" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.334248 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": container with ID starting with baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8 not found: ID does not exist" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.334268 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} err="failed to get container status \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": rpc error: code = NotFound desc = could not find container \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": container with ID starting with baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.334282 4931 scope.go:117] "RemoveContainer" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.334701 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": container with ID starting with 61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0 not found: ID does not exist" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.334738 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} err="failed to get container status \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": rpc error: code = NotFound desc = could not find container \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": container with ID starting with 61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.334758 4931 scope.go:117] "RemoveContainer" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" Jan 30 05:18:20 crc kubenswrapper[4931]: E0130 05:18:20.335099 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": container with ID starting with e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929 not found: ID does not exist" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.335155 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} err="failed to get container status \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": rpc error: code = NotFound desc = could not find container \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": container with ID starting with e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.335185 4931 scope.go:117] "RemoveContainer" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.335591 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} err="failed to get container status \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": rpc error: code = NotFound desc = could not find container \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": container with ID starting with cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.335617 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.335967 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} err="failed to get container status \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": rpc error: code = NotFound desc = could not find container \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": container with ID starting with 7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.335999 4931 scope.go:117] "RemoveContainer" containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.336366 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} err="failed to get container status \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": rpc error: code = NotFound desc = could not find container \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": container with ID starting with 9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.336402 4931 scope.go:117] "RemoveContainer" containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.336794 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} err="failed to get container status \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": rpc error: code = NotFound desc = could not find container \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": container with ID starting with 
a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.336819 4931 scope.go:117] "RemoveContainer" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.337534 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} err="failed to get container status \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": rpc error: code = NotFound desc = could not find container \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": container with ID starting with 839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.337572 4931 scope.go:117] "RemoveContainer" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.337916 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} err="failed to get container status \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": rpc error: code = NotFound desc = could not find container \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": container with ID starting with 42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.337942 4931 scope.go:117] "RemoveContainer" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.338293 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} err="failed to get container status \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": rpc error: code = NotFound desc = could not find container \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": container with ID starting with a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.338329 4931 scope.go:117] "RemoveContainer" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.338744 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} err="failed to get container status \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": rpc error: code = NotFound desc = could not find container \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": container with ID starting with baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.338792 4931 scope.go:117] "RemoveContainer" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.339155 4931 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} err="failed to get container status \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": rpc error: code = NotFound desc = could not find container \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": container with ID starting with 61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.339183 4931 scope.go:117] "RemoveContainer" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.339509 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} err="failed to get container status \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": rpc error: code = NotFound desc = could not find container \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": container with ID starting with e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.339541 4931 scope.go:117] "RemoveContainer" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.339853 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} err="failed to get container status \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": rpc error: code = NotFound desc = could not find container \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": container with ID starting with cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.339884 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.340191 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} err="failed to get container status \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": rpc error: code = NotFound desc = could not find container \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": container with ID starting with 7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.340222 4931 scope.go:117] "RemoveContainer" containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.340560 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} err="failed to get container status \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": rpc error: code = NotFound desc = could not find container \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": container with ID starting with 9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8 not found: ID does not exist" Jan 
30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.340599 4931 scope.go:117] "RemoveContainer" containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.340954 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} err="failed to get container status \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": rpc error: code = NotFound desc = could not find container \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": container with ID starting with a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.340979 4931 scope.go:117] "RemoveContainer" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.341472 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} err="failed to get container status \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": rpc error: code = NotFound desc = could not find container \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": container with ID starting with 839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.341564 4931 scope.go:117] "RemoveContainer" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.342099 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} err="failed to get container status \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": rpc error: code = NotFound desc = could not find container \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": container with ID starting with 42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.342123 4931 scope.go:117] "RemoveContainer" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.342707 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} err="failed to get container status \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": rpc error: code = NotFound desc = could not find container \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": container with ID starting with a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.342734 4931 scope.go:117] "RemoveContainer" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.343050 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} err="failed to get container status 
\"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": rpc error: code = NotFound desc = could not find container \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": container with ID starting with baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.343081 4931 scope.go:117] "RemoveContainer" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.343398 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} err="failed to get container status \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": rpc error: code = NotFound desc = could not find container \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": container with ID starting with 61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.343441 4931 scope.go:117] "RemoveContainer" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.343757 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} err="failed to get container status \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": rpc error: code = NotFound desc = could not find container \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": container with ID starting with e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.343787 4931 scope.go:117] "RemoveContainer" containerID="cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.344088 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c"} err="failed to get container status \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": rpc error: code = NotFound desc = could not find container \"cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c\": container with ID starting with cd400fd007bec7543ab705386136ac794f8c4b56a2679a370f013421aa13560c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.344141 4931 scope.go:117] "RemoveContainer" containerID="7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.344445 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c"} err="failed to get container status \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": rpc error: code = NotFound desc = could not find container \"7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c\": container with ID starting with 7ef95db790dfeb20be4030c02df79f432dee70d8b4f237ec1ec01553b5c1f98c not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.344471 4931 scope.go:117] "RemoveContainer" 
containerID="9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.344762 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8"} err="failed to get container status \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": rpc error: code = NotFound desc = could not find container \"9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8\": container with ID starting with 9306b8ab9acd91e0b773b5e755ba4c868d5dbf2863dae9eed3edf578a3b5bbb8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.344791 4931 scope.go:117] "RemoveContainer" containerID="a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.345160 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29"} err="failed to get container status \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": rpc error: code = NotFound desc = could not find container \"a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29\": container with ID starting with a8e71ffb4316ce6909502b021dd80cc5d80c7ce1f480eaeb11a6a4198c0e7b29 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.345215 4931 scope.go:117] "RemoveContainer" containerID="839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.345593 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512"} err="failed to get container status \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": rpc error: code = NotFound desc = could not find container \"839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512\": container with ID starting with 839a02e2f55c01661faa021e3705a8969ef395cd9fbbce16ee22b3e4a1c82512 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.345618 4931 scope.go:117] "RemoveContainer" containerID="42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.345913 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3"} err="failed to get container status \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": rpc error: code = NotFound desc = could not find container \"42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3\": container with ID starting with 42be2b3ccb29f66b8f4ff7c6c5daa729046e541adf8447dac6987339efabbbd3 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.345940 4931 scope.go:117] "RemoveContainer" containerID="a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.346209 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b"} err="failed to get container status \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": rpc error: code = NotFound desc = could not find 
container \"a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b\": container with ID starting with a8837cc5d6b53dfbeeb0b552e656854dd0111ff1e35ff8fdf92c418fe3870b7b not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.346235 4931 scope.go:117] "RemoveContainer" containerID="baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.346587 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8"} err="failed to get container status \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": rpc error: code = NotFound desc = could not find container \"baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8\": container with ID starting with baa376478cdeb9a9f72f7c08d46154891a0e1b1d67d40ed2b31e48b3275806a8 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.346617 4931 scope.go:117] "RemoveContainer" containerID="61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.346895 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0"} err="failed to get container status \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": rpc error: code = NotFound desc = could not find container \"61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0\": container with ID starting with 61e4e9b07acc1ee07042237be757b14f1b8680e2bdae3a4c64390e055b00bfb0 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.346935 4931 scope.go:117] "RemoveContainer" containerID="e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.347335 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929"} err="failed to get container status \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": rpc error: code = NotFound desc = could not find container \"e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929\": container with ID starting with e4e6ad71fc07be5e4f4786c4302cf0df65534936f3d55a6a2976a8fa9f15d929 not found: ID does not exist" Jan 30 05:18:20 crc kubenswrapper[4931]: I0130 05:18:20.359794 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:21 crc kubenswrapper[4931]: I0130 05:18:21.110459 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/2.log" Jan 30 05:18:21 crc kubenswrapper[4931]: I0130 05:18:21.116879 4931 generic.go:334] "Generic (PLEG): container finished" podID="43fde21b-c04b-428e-a4bb-4f6e4969bd5f" containerID="4a4975462ab619a6fe83eeec32e381f839f4a0b39193714651129b6f8513eea2" exitCode=0 Jan 30 05:18:21 crc kubenswrapper[4931]: I0130 05:18:21.116937 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerDied","Data":"4a4975462ab619a6fe83eeec32e381f839f4a0b39193714651129b6f8513eea2"} Jan 30 05:18:21 crc kubenswrapper[4931]: I0130 05:18:21.116980 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"448490ef9d880fe87b6b86021fd5d8abf9e1ea792e944858594d2bfebb08c624"} Jan 30 05:18:21 crc kubenswrapper[4931]: I0130 05:18:21.431091 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="556d9fc5-72b4-4134-8074-1e9d07012763" path="/var/lib/kubelet/pods/556d9fc5-72b4-4134-8074-1e9d07012763/volumes" Jan 30 05:18:22 crc kubenswrapper[4931]: I0130 05:18:22.127574 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"7765234d2d4274b4269d0d1225cb6927813c464396ba77aec3ae59ff4dea7ac1"} Jan 30 05:18:22 crc kubenswrapper[4931]: I0130 05:18:22.127977 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"bba9b8c2003f76cb846598347185800b35b93599340fe987706c8a31e115cdde"} Jan 30 05:18:22 crc kubenswrapper[4931]: I0130 05:18:22.127998 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"aa67ee282b6ac130c8c72c8dcde4cbb3d54b1a96cbfb0c2927617f89f7096859"} Jan 30 05:18:22 crc kubenswrapper[4931]: I0130 05:18:22.128017 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"cfa995b9fb3214346aa12c010b006cd6c0dfd6dd8d4da1ca01a02a72cbc91337"} Jan 30 05:18:22 crc kubenswrapper[4931]: I0130 05:18:22.128036 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"917280023b05a7246efa0dd53af4fa0193bd41e1f06b879b46889c99f98f88c9"} Jan 30 05:18:22 crc kubenswrapper[4931]: I0130 05:18:22.128054 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"adbc78a70d9bb6a1934105580e943aef2e50b51b8a076e9f8c8a63802e5a552c"} Jan 30 05:18:25 crc kubenswrapper[4931]: I0130 05:18:25.159271 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" 
event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"eafd81b26ecd62c7e0c251b2c5c1c810c1a880314f61f58fb274c6c1bc655810"} Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.770713 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mlqzd"] Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.771887 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.774201 4931 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-ff66z" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.775208 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.775388 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.775890 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.883384 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f395498-8955-4aa5-b283-62e5b12505f1-node-mnt\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.883611 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vf8z\" (UniqueName: \"kubernetes.io/projected/7f395498-8955-4aa5-b283-62e5b12505f1-kube-api-access-9vf8z\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.883688 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f395498-8955-4aa5-b283-62e5b12505f1-crc-storage\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.984905 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f395498-8955-4aa5-b283-62e5b12505f1-node-mnt\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.985266 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vf8z\" (UniqueName: \"kubernetes.io/projected/7f395498-8955-4aa5-b283-62e5b12505f1-kube-api-access-9vf8z\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.985288 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f395498-8955-4aa5-b283-62e5b12505f1-crc-storage\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.985678 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f395498-8955-4aa5-b283-62e5b12505f1-node-mnt\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:26 crc kubenswrapper[4931]: I0130 05:18:26.986132 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f395498-8955-4aa5-b283-62e5b12505f1-crc-storage\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.011785 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vf8z\" (UniqueName: \"kubernetes.io/projected/7f395498-8955-4aa5-b283-62e5b12505f1-kube-api-access-9vf8z\") pod \"crc-storage-crc-mlqzd\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.092074 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.113376 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(737c11a956ee077a82213f856be422791ad108dda5dbd2b38e8b7724d86caa2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.113458 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(737c11a956ee077a82213f856be422791ad108dda5dbd2b38e8b7724d86caa2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.113482 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(737c11a956ee077a82213f856be422791ad108dda5dbd2b38e8b7724d86caa2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.113545 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(737c11a956ee077a82213f856be422791ad108dda5dbd2b38e8b7724d86caa2e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-mlqzd" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.202828 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" event={"ID":"43fde21b-c04b-428e-a4bb-4f6e4969bd5f","Type":"ContainerStarted","Data":"f494d142d3810a8c02eb414cfcd8f08530c3d20aeb73cd663fd224f262416510"} Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.203453 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.203530 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.203571 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.241123 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.242589 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.245605 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" podStartSLOduration=7.245583745 podStartE2EDuration="7.245583745s" podCreationTimestamp="2026-01-30 05:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:18:27.241410226 +0000 UTC m=+642.611320503" watchObservedRunningTime="2026-01-30 05:18:27.245583745 +0000 UTC m=+642.615494002" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.492937 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mlqzd"] Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.493516 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: I0130 05:18:27.494316 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.520717 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(40ec2700f34632fc2691d55591e0d01e9f867adc960b314c3cde6c470e1c0e87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.520834 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(40ec2700f34632fc2691d55591e0d01e9f867adc960b314c3cde6c470e1c0e87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.520867 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(40ec2700f34632fc2691d55591e0d01e9f867adc960b314c3cde6c470e1c0e87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:27 crc kubenswrapper[4931]: E0130 05:18:27.520951 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(40ec2700f34632fc2691d55591e0d01e9f867adc960b314c3cde6c470e1c0e87): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-mlqzd" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" Jan 30 05:18:35 crc kubenswrapper[4931]: I0130 05:18:35.426998 4931 scope.go:117] "RemoveContainer" containerID="9cbe0bfee502f12e8f2f3a6f1a461efb27353f5529809ccc54fecbb26b304ada" Jan 30 05:18:35 crc kubenswrapper[4931]: E0130 05:18:35.428056 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lm7vv_openshift-multus(b17d6adf-e35b-4bf8-9ab2-e6720e595835)\"" pod="openshift-multus/multus-lm7vv" podUID="b17d6adf-e35b-4bf8-9ab2-e6720e595835" Jan 30 05:18:38 crc kubenswrapper[4931]: I0130 05:18:38.420987 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:38 crc kubenswrapper[4931]: I0130 05:18:38.421778 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:38 crc kubenswrapper[4931]: E0130 05:18:38.461527 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(422641cfdadcfe112831ce54cf208175350f6941b2c889305fecd5510bf2deed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 05:18:38 crc kubenswrapper[4931]: E0130 05:18:38.461624 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(422641cfdadcfe112831ce54cf208175350f6941b2c889305fecd5510bf2deed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:38 crc kubenswrapper[4931]: E0130 05:18:38.461659 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(422641cfdadcfe112831ce54cf208175350f6941b2c889305fecd5510bf2deed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:38 crc kubenswrapper[4931]: E0130 05:18:38.461739 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(422641cfdadcfe112831ce54cf208175350f6941b2c889305fecd5510bf2deed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-mlqzd" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" Jan 30 05:18:48 crc kubenswrapper[4931]: I0130 05:18:48.422191 4931 scope.go:117] "RemoveContainer" containerID="9cbe0bfee502f12e8f2f3a6f1a461efb27353f5529809ccc54fecbb26b304ada" Jan 30 05:18:49 crc kubenswrapper[4931]: I0130 05:18:49.376047 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lm7vv_b17d6adf-e35b-4bf8-9ab2-e6720e595835/kube-multus/2.log" Jan 30 05:18:49 crc kubenswrapper[4931]: I0130 05:18:49.376520 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lm7vv" event={"ID":"b17d6adf-e35b-4bf8-9ab2-e6720e595835","Type":"ContainerStarted","Data":"60801b60c842bc20aee7bc70499d177ba3a056474ac483bd7ce3e22f46834e1d"} Jan 30 05:18:49 crc kubenswrapper[4931]: I0130 05:18:49.421258 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:49 crc kubenswrapper[4931]: I0130 05:18:49.421904 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:49 crc kubenswrapper[4931]: E0130 05:18:49.487486 4931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(e0beac559643fc87ac54071364a74e16a75eb767f0589d87e2a18090fb0c942e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 05:18:49 crc kubenswrapper[4931]: E0130 05:18:49.487609 4931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(e0beac559643fc87ac54071364a74e16a75eb767f0589d87e2a18090fb0c942e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:49 crc kubenswrapper[4931]: E0130 05:18:49.487663 4931 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(e0beac559643fc87ac54071364a74e16a75eb767f0589d87e2a18090fb0c942e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:18:49 crc kubenswrapper[4931]: E0130 05:18:49.487773 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-mlqzd_crc-storage(7f395498-8955-4aa5-b283-62e5b12505f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-mlqzd_crc-storage_7f395498-8955-4aa5-b283-62e5b12505f1_0(e0beac559643fc87ac54071364a74e16a75eb767f0589d87e2a18090fb0c942e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-mlqzd" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" Jan 30 05:18:50 crc kubenswrapper[4931]: I0130 05:18:50.400654 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-s68jr" Jan 30 05:19:00 crc kubenswrapper[4931]: I0130 05:19:00.421139 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:19:00 crc kubenswrapper[4931]: I0130 05:19:00.422393 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:19:00 crc kubenswrapper[4931]: I0130 05:19:00.688567 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mlqzd"] Jan 30 05:19:00 crc kubenswrapper[4931]: I0130 05:19:00.703017 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 05:19:01 crc kubenswrapper[4931]: I0130 05:19:01.493568 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mlqzd" event={"ID":"7f395498-8955-4aa5-b283-62e5b12505f1","Type":"ContainerStarted","Data":"a5bbe8e3c7fb83e204f7e74e58ca6a9e3036aba44081e4acf435a3516fa09a86"} Jan 30 05:19:02 crc kubenswrapper[4931]: I0130 05:19:02.508288 4931 generic.go:334] "Generic (PLEG): container finished" podID="7f395498-8955-4aa5-b283-62e5b12505f1" containerID="ceeb8bcdff334f1b3490e1ee30443dff7dd6fd17a3f2d90428a1f38ad6f3cd5e" exitCode=0 Jan 30 05:19:02 crc kubenswrapper[4931]: I0130 05:19:02.508507 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mlqzd" event={"ID":"7f395498-8955-4aa5-b283-62e5b12505f1","Type":"ContainerDied","Data":"ceeb8bcdff334f1b3490e1ee30443dff7dd6fd17a3f2d90428a1f38ad6f3cd5e"} Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.837671 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.874359 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f395498-8955-4aa5-b283-62e5b12505f1-node-mnt\") pod \"7f395498-8955-4aa5-b283-62e5b12505f1\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.874415 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f395498-8955-4aa5-b283-62e5b12505f1-crc-storage\") pod \"7f395498-8955-4aa5-b283-62e5b12505f1\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.874450 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vf8z\" (UniqueName: \"kubernetes.io/projected/7f395498-8955-4aa5-b283-62e5b12505f1-kube-api-access-9vf8z\") pod \"7f395498-8955-4aa5-b283-62e5b12505f1\" (UID: \"7f395498-8955-4aa5-b283-62e5b12505f1\") " Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.874556 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f395498-8955-4aa5-b283-62e5b12505f1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "7f395498-8955-4aa5-b283-62e5b12505f1" (UID: "7f395498-8955-4aa5-b283-62e5b12505f1"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.880951 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f395498-8955-4aa5-b283-62e5b12505f1-kube-api-access-9vf8z" (OuterVolumeSpecName: "kube-api-access-9vf8z") pod "7f395498-8955-4aa5-b283-62e5b12505f1" (UID: "7f395498-8955-4aa5-b283-62e5b12505f1"). InnerVolumeSpecName "kube-api-access-9vf8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.896323 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f395498-8955-4aa5-b283-62e5b12505f1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "7f395498-8955-4aa5-b283-62e5b12505f1" (UID: "7f395498-8955-4aa5-b283-62e5b12505f1"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.975628 4931 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f395498-8955-4aa5-b283-62e5b12505f1-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.975670 4931 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f395498-8955-4aa5-b283-62e5b12505f1-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 05:19:03 crc kubenswrapper[4931]: I0130 05:19:03.975684 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vf8z\" (UniqueName: \"kubernetes.io/projected/7f395498-8955-4aa5-b283-62e5b12505f1-kube-api-access-9vf8z\") on node \"crc\" DevicePath \"\"" Jan 30 05:19:04 crc kubenswrapper[4931]: I0130 05:19:04.525994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mlqzd" event={"ID":"7f395498-8955-4aa5-b283-62e5b12505f1","Type":"ContainerDied","Data":"a5bbe8e3c7fb83e204f7e74e58ca6a9e3036aba44081e4acf435a3516fa09a86"} Jan 30 05:19:04 crc kubenswrapper[4931]: I0130 05:19:04.526305 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5bbe8e3c7fb83e204f7e74e58ca6a9e3036aba44081e4acf435a3516fa09a86" Jan 30 05:19:04 crc kubenswrapper[4931]: I0130 05:19:04.526121 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mlqzd" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.208906 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4"] Jan 30 05:19:12 crc kubenswrapper[4931]: E0130 05:19:12.209724 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" containerName="storage" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.209744 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" containerName="storage" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.209929 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" containerName="storage" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.211069 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.213499 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.224384 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4"] Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.338876 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.339246 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.339327 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpd46\" (UniqueName: \"kubernetes.io/projected/52241d6a-5526-4d2b-baeb-e1fd0361a188-kube-api-access-lpd46\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.441487 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.441575 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpd46\" (UniqueName: \"kubernetes.io/projected/52241d6a-5526-4d2b-baeb-e1fd0361a188-kube-api-access-lpd46\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.441642 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.442402 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.442972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.475777 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpd46\" (UniqueName: \"kubernetes.io/projected/52241d6a-5526-4d2b-baeb-e1fd0361a188-kube-api-access-lpd46\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:12 crc kubenswrapper[4931]: I0130 05:19:12.537569 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" Jan 30 05:19:13 crc kubenswrapper[4931]: I0130 05:19:13.015710 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4"] Jan 30 05:19:13 crc kubenswrapper[4931]: I0130 05:19:13.589747 4931 generic.go:334] "Generic (PLEG): container finished" podID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerID="9e96d1070728e64bf500446afd3ca1d2228d224750611a62c09395ec57bc5ab0" exitCode=0 Jan 30 05:19:13 crc kubenswrapper[4931]: I0130 05:19:13.589861 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" event={"ID":"52241d6a-5526-4d2b-baeb-e1fd0361a188","Type":"ContainerDied","Data":"9e96d1070728e64bf500446afd3ca1d2228d224750611a62c09395ec57bc5ab0"} Jan 30 05:19:13 crc kubenswrapper[4931]: I0130 05:19:13.590207 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" event={"ID":"52241d6a-5526-4d2b-baeb-e1fd0361a188","Type":"ContainerStarted","Data":"ef6f7ae52213ba6bff1be6d319e5c662728138a3bc0b1903a9f9435f2e6d5101"} Jan 30 05:19:15 crc kubenswrapper[4931]: I0130 05:19:15.606184 4931 generic.go:334] "Generic (PLEG): container finished" podID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerID="b6f970948e85c4ef45a48cc9b5b4ef2675ec23befd30a8e3faeb7d9d92cc97b5" exitCode=0 Jan 30 05:19:15 crc kubenswrapper[4931]: I0130 05:19:15.606476 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" event={"ID":"52241d6a-5526-4d2b-baeb-e1fd0361a188","Type":"ContainerDied","Data":"b6f970948e85c4ef45a48cc9b5b4ef2675ec23befd30a8e3faeb7d9d92cc97b5"} Jan 30 05:19:16 crc kubenswrapper[4931]: I0130 05:19:16.617852 4931 generic.go:334] "Generic (PLEG): container finished" podID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerID="2590304120f8b42483614a45981a2e4b17d4d576509eea7818260063069d09e5" exitCode=0 Jan 30 05:19:16 crc kubenswrapper[4931]: I0130 
05:19:16.618080 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" event={"ID":"52241d6a-5526-4d2b-baeb-e1fd0361a188","Type":"ContainerDied","Data":"2590304120f8b42483614a45981a2e4b17d4d576509eea7818260063069d09e5"}
Jan 30 05:19:17 crc kubenswrapper[4931]: I0130 05:19:17.955202 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4"
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.017043 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpd46\" (UniqueName: \"kubernetes.io/projected/52241d6a-5526-4d2b-baeb-e1fd0361a188-kube-api-access-lpd46\") pod \"52241d6a-5526-4d2b-baeb-e1fd0361a188\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") "
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.017192 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-bundle\") pod \"52241d6a-5526-4d2b-baeb-e1fd0361a188\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") "
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.017263 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-util\") pod \"52241d6a-5526-4d2b-baeb-e1fd0361a188\" (UID: \"52241d6a-5526-4d2b-baeb-e1fd0361a188\") "
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.018717 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-bundle" (OuterVolumeSpecName: "bundle") pod "52241d6a-5526-4d2b-baeb-e1fd0361a188" (UID: "52241d6a-5526-4d2b-baeb-e1fd0361a188"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.023762 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52241d6a-5526-4d2b-baeb-e1fd0361a188-kube-api-access-lpd46" (OuterVolumeSpecName: "kube-api-access-lpd46") pod "52241d6a-5526-4d2b-baeb-e1fd0361a188" (UID: "52241d6a-5526-4d2b-baeb-e1fd0361a188"). InnerVolumeSpecName "kube-api-access-lpd46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.056439 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-util" (OuterVolumeSpecName: "util") pod "52241d6a-5526-4d2b-baeb-e1fd0361a188" (UID: "52241d6a-5526-4d2b-baeb-e1fd0361a188"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.119471 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.119535 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52241d6a-5526-4d2b-baeb-e1fd0361a188-util\") on node \"crc\" DevicePath \"\""
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.119560 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpd46\" (UniqueName: \"kubernetes.io/projected/52241d6a-5526-4d2b-baeb-e1fd0361a188-kube-api-access-lpd46\") on node \"crc\" DevicePath \"\""
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.635369 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4" event={"ID":"52241d6a-5526-4d2b-baeb-e1fd0361a188","Type":"ContainerDied","Data":"ef6f7ae52213ba6bff1be6d319e5c662728138a3bc0b1903a9f9435f2e6d5101"}
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.635450 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6f7ae52213ba6bff1be6d319e5c662728138a3bc0b1903a9f9435f2e6d5101"
Jan 30 05:19:18 crc kubenswrapper[4931]: I0130 05:19:18.635543 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.697135 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5tdhq"]
Jan 30 05:19:20 crc kubenswrapper[4931]: E0130 05:19:20.697574 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="util"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.697585 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="util"
Jan 30 05:19:20 crc kubenswrapper[4931]: E0130 05:19:20.697597 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="pull"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.697603 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="pull"
Jan 30 05:19:20 crc kubenswrapper[4931]: E0130 05:19:20.697612 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="extract"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.697618 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="extract"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.697700 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52241d6a-5526-4d2b-baeb-e1fd0361a188" containerName="extract"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.698060 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.699643 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-gxxfj"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.700028 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.700572 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.708414 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5tdhq"]
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.756482 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv9xw\" (UniqueName: \"kubernetes.io/projected/1c291268-6fc4-48a1-94dc-1e9e052e7bc6-kube-api-access-zv9xw\") pod \"nmstate-operator-646758c888-5tdhq\" (UID: \"1c291268-6fc4-48a1-94dc-1e9e052e7bc6\") " pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.857902 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv9xw\" (UniqueName: \"kubernetes.io/projected/1c291268-6fc4-48a1-94dc-1e9e052e7bc6-kube-api-access-zv9xw\") pod \"nmstate-operator-646758c888-5tdhq\" (UID: \"1c291268-6fc4-48a1-94dc-1e9e052e7bc6\") " pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq"
Jan 30 05:19:20 crc kubenswrapper[4931]: I0130 05:19:20.878479 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv9xw\" (UniqueName: \"kubernetes.io/projected/1c291268-6fc4-48a1-94dc-1e9e052e7bc6-kube-api-access-zv9xw\") pod \"nmstate-operator-646758c888-5tdhq\" (UID: \"1c291268-6fc4-48a1-94dc-1e9e052e7bc6\") " pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq"
Jan 30 05:19:21 crc kubenswrapper[4931]: I0130 05:19:21.012057 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq"
Jan 30 05:19:21 crc kubenswrapper[4931]: I0130 05:19:21.283992 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5tdhq"]
Jan 30 05:19:21 crc kubenswrapper[4931]: W0130 05:19:21.289938 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c291268_6fc4_48a1_94dc_1e9e052e7bc6.slice/crio-175c92fcf7f5771dd798b8c59937aa27735d46da1aa93fcff2dd15cbb070211d WatchSource:0}: Error finding container 175c92fcf7f5771dd798b8c59937aa27735d46da1aa93fcff2dd15cbb070211d: Status 404 returned error can't find the container with id 175c92fcf7f5771dd798b8c59937aa27735d46da1aa93fcff2dd15cbb070211d
Jan 30 05:19:21 crc kubenswrapper[4931]: I0130 05:19:21.655816 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq" event={"ID":"1c291268-6fc4-48a1-94dc-1e9e052e7bc6","Type":"ContainerStarted","Data":"175c92fcf7f5771dd798b8c59937aa27735d46da1aa93fcff2dd15cbb070211d"}
Jan 30 05:19:24 crc kubenswrapper[4931]: I0130 05:19:24.684767 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq" event={"ID":"1c291268-6fc4-48a1-94dc-1e9e052e7bc6","Type":"ContainerStarted","Data":"621677c1aef1ee0502a74c45f6339f605cec24c4c5cc5583afa3ef7b97829907"}
Jan 30 05:19:24 crc kubenswrapper[4931]: I0130 05:19:24.717083 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-5tdhq" podStartSLOduration=2.499103366 podStartE2EDuration="4.717056583s" podCreationTimestamp="2026-01-30 05:19:20 +0000 UTC" firstStartedPulling="2026-01-30 05:19:21.292055165 +0000 UTC m=+696.661965432" lastFinishedPulling="2026-01-30 05:19:23.510008392 +0000 UTC m=+698.879918649" observedRunningTime="2026-01-30 05:19:24.711310456 +0000 UTC m=+700.081220753" watchObservedRunningTime="2026-01-30 05:19:24.717056583 +0000 UTC m=+700.086966880"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.709516 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.710361 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.722213 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.727853 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.728729 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.729974 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.733026 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-fp7dl"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.752709 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.789173 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-6mhzq"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.789819 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llc2h\" (UniqueName: \"kubernetes.io/projected/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-kube-api-access-llc2h\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823524 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-ovs-socket\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823546 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-nmstate-lock\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a115b68a-a9ad-44db-90f5-1f016556956a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-krf2l\" (UID: \"a115b68a-a9ad-44db-90f5-1f016556956a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823603 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6sc\" (UniqueName: \"kubernetes.io/projected/01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c-kube-api-access-sc6sc\") pod \"nmstate-metrics-54757c584b-2z4jr\" (UID: \"01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823628 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-dbus-socket\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.823660 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flztx\" (UniqueName: \"kubernetes.io/projected/a115b68a-a9ad-44db-90f5-1f016556956a-kube-api-access-flztx\") pod \"nmstate-webhook-8474b5b9d8-krf2l\" (UID: \"a115b68a-a9ad-44db-90f5-1f016556956a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.865614 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.866471 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.871069 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.871397 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.871540 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-khtrv"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.872976 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"]
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.924971 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-dbus-socket\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925060 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8800ae15-51ee-4310-889d-3608008986bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925130 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flztx\" (UniqueName: \"kubernetes.io/projected/a115b68a-a9ad-44db-90f5-1f016556956a-kube-api-access-flztx\") pod \"nmstate-webhook-8474b5b9d8-krf2l\" (UID: \"a115b68a-a9ad-44db-90f5-1f016556956a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925161 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hk8n\" (UniqueName: \"kubernetes.io/projected/8800ae15-51ee-4310-889d-3608008986bd-kube-api-access-7hk8n\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925285 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llc2h\" (UniqueName: \"kubernetes.io/projected/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-kube-api-access-llc2h\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925295 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-dbus-socket\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925320 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8800ae15-51ee-4310-889d-3608008986bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925378 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-ovs-socket\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925408 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-nmstate-lock\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925475 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a115b68a-a9ad-44db-90f5-1f016556956a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-krf2l\" (UID: \"a115b68a-a9ad-44db-90f5-1f016556956a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925498 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6sc\" (UniqueName: \"kubernetes.io/projected/01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c-kube-api-access-sc6sc\") pod \"nmstate-metrics-54757c584b-2z4jr\" (UID: \"01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925603 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-ovs-socket\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.925660 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-nmstate-lock\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.932934 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a115b68a-a9ad-44db-90f5-1f016556956a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-krf2l\" (UID: \"a115b68a-a9ad-44db-90f5-1f016556956a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.939511 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6sc\" (UniqueName: \"kubernetes.io/projected/01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c-kube-api-access-sc6sc\") pod \"nmstate-metrics-54757c584b-2z4jr\" (UID: \"01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.939519 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llc2h\" (UniqueName: \"kubernetes.io/projected/66e77bed-ca3a-4cfe-874c-d6874c52ab0e-kube-api-access-llc2h\") pod \"nmstate-handler-6mhzq\" (UID: \"66e77bed-ca3a-4cfe-874c-d6874c52ab0e\") " pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:25 crc kubenswrapper[4931]: I0130 05:19:25.940231 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flztx\" (UniqueName: \"kubernetes.io/projected/a115b68a-a9ad-44db-90f5-1f016556956a-kube-api-access-flztx\") pod \"nmstate-webhook-8474b5b9d8-krf2l\" (UID: \"a115b68a-a9ad-44db-90f5-1f016556956a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.026231 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8800ae15-51ee-4310-889d-3608008986bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: E0130 05:19:26.026555 4931 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 30 05:19:26 crc kubenswrapper[4931]: E0130 05:19:26.026653 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8800ae15-51ee-4310-889d-3608008986bd-plugin-serving-cert podName:8800ae15-51ee-4310-889d-3608008986bd nodeName:}" failed. No retries permitted until 2026-01-30 05:19:26.526627577 +0000 UTC m=+701.896537884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8800ae15-51ee-4310-889d-3608008986bd-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-mwzz2" (UID: "8800ae15-51ee-4310-889d-3608008986bd") : secret "plugin-serving-cert" not found
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.026715 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hk8n\" (UniqueName: \"kubernetes.io/projected/8800ae15-51ee-4310-889d-3608008986bd-kube-api-access-7hk8n\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.026879 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8800ae15-51ee-4310-889d-3608008986bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.027637 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.027861 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8800ae15-51ee-4310-889d-3608008986bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.045731 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.046677 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hk8n\" (UniqueName: \"kubernetes.io/projected/8800ae15-51ee-4310-889d-3608008986bd-kube-api-access-7hk8n\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.055554 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5977cc965f-tjfns"]
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.060310 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.070074 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5977cc965f-tjfns"]
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.102708 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127538 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-oauth-serving-cert\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62gt\" (UniqueName: \"kubernetes.io/projected/5ce22043-f6b4-4294-8522-339a87e7b68a-kube-api-access-j62gt\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127604 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-service-ca\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127628 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ce22043-f6b4-4294-8522-339a87e7b68a-console-oauth-config\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127646 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce22043-f6b4-4294-8522-339a87e7b68a-console-serving-cert\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127675 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-trusted-ca-bundle\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.127710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-console-config\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229140 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-console-config\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229488 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-oauth-serving-cert\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229521 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j62gt\" (UniqueName: \"kubernetes.io/projected/5ce22043-f6b4-4294-8522-339a87e7b68a-kube-api-access-j62gt\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229538 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-service-ca\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229561 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ce22043-f6b4-4294-8522-339a87e7b68a-console-oauth-config\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229580 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce22043-f6b4-4294-8522-339a87e7b68a-console-serving-cert\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.229614 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-trusted-ca-bundle\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.230019 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-console-config\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.230571 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-oauth-serving-cert\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.230613 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-trusted-ca-bundle\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.230619 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce22043-f6b4-4294-8522-339a87e7b68a-service-ca\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.233847 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce22043-f6b4-4294-8522-339a87e7b68a-console-serving-cert\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.236205 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ce22043-f6b4-4294-8522-339a87e7b68a-console-oauth-config\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.245687 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j62gt\" (UniqueName: \"kubernetes.io/projected/5ce22043-f6b4-4294-8522-339a87e7b68a-kube-api-access-j62gt\") pod \"console-5977cc965f-tjfns\" (UID: \"5ce22043-f6b4-4294-8522-339a87e7b68a\") " pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.426636 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.517237 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"]
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.533611 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8800ae15-51ee-4310-889d-3608008986bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.541337 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8800ae15-51ee-4310-889d-3608008986bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mwzz2\" (UID: \"8800ae15-51ee-4310-889d-3608008986bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.576210 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-2z4jr"]
Jan 30 05:19:26 crc kubenswrapper[4931]: W0130 05:19:26.591160 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e6ed8f_a69f_4e32_b275_6ea9a5cebf1c.slice/crio-530ad43d810bf2ae458c6609c9427c28be64ec1ae5d6cceba20ec0377c60d263 WatchSource:0}: Error finding container 530ad43d810bf2ae458c6609c9427c28be64ec1ae5d6cceba20ec0377c60d263: Status 404 returned error can't find the container with id 530ad43d810bf2ae458c6609c9427c28be64ec1ae5d6cceba20ec0377c60d263
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.694952 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l" event={"ID":"a115b68a-a9ad-44db-90f5-1f016556956a","Type":"ContainerStarted","Data":"27ff35d51ed7fe223c212d077ed66e09d42ee05f18c01ce45798ed92a9c58657"}
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.695687 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr" event={"ID":"01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c","Type":"ContainerStarted","Data":"530ad43d810bf2ae458c6609c9427c28be64ec1ae5d6cceba20ec0377c60d263"}
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.696617 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6mhzq" event={"ID":"66e77bed-ca3a-4cfe-874c-d6874c52ab0e","Type":"ContainerStarted","Data":"e84ba3b0367eb90cfc282570737824f48e8d330dc37c08681d53d96c66655473"}
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.702685 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5977cc965f-tjfns"]
Jan 30 05:19:26 crc kubenswrapper[4931]: W0130 05:19:26.711343 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ce22043_f6b4_4294_8522_339a87e7b68a.slice/crio-78dcd5682a9e6c3d5454103207ad8a6fdc5ee7ad29bcf0d273d0c702d1e16c3d WatchSource:0}: Error finding container 78dcd5682a9e6c3d5454103207ad8a6fdc5ee7ad29bcf0d273d0c702d1e16c3d: Status 404 returned error can't find the container with id 78dcd5682a9e6c3d5454103207ad8a6fdc5ee7ad29bcf0d273d0c702d1e16c3d
Jan 30 05:19:26 crc kubenswrapper[4931]: I0130 05:19:26.785650 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.083759 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2"]
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.363293 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.363377 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.704353 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5977cc965f-tjfns" event={"ID":"5ce22043-f6b4-4294-8522-339a87e7b68a","Type":"ContainerStarted","Data":"313f21e9c9f42aa6e433cbbfe8c7f75d13dffb7ee072c68df8f5f2720218bc4f"}
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.704441 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5977cc965f-tjfns" event={"ID":"5ce22043-f6b4-4294-8522-339a87e7b68a","Type":"ContainerStarted","Data":"78dcd5682a9e6c3d5454103207ad8a6fdc5ee7ad29bcf0d273d0c702d1e16c3d"}
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.706257 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2" event={"ID":"8800ae15-51ee-4310-889d-3608008986bd","Type":"ContainerStarted","Data":"27994bc45d61429b45f71fd70a54a4edd07c28f0cf60088b3894cadc11473b47"}
Jan 30 05:19:27 crc kubenswrapper[4931]: I0130 05:19:27.724268 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5977cc965f-tjfns" podStartSLOduration=1.72424194 podStartE2EDuration="1.72424194s" podCreationTimestamp="2026-01-30 05:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:19:27.719836652 +0000 UTC m=+703.089746909" watchObservedRunningTime="2026-01-30 05:19:27.72424194 +0000 UTC m=+703.094152227"
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.727938 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6mhzq" event={"ID":"66e77bed-ca3a-4cfe-874c-d6874c52ab0e","Type":"ContainerStarted","Data":"7630dbb4ff8d4f848de55d2dd84579beace9374a92232dae6ae7e8cb77b2d1f0"}
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.728616 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.730666 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2" event={"ID":"8800ae15-51ee-4310-889d-3608008986bd","Type":"ContainerStarted","Data":"3a865d76a867ef3b8859eff6e42dafb59f7f43dc44b5a90b29caf1523c57ec76"}
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.733917 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l" event={"ID":"a115b68a-a9ad-44db-90f5-1f016556956a","Type":"ContainerStarted","Data":"237f5a71a2f9711fa60cd985670de36aa38ade9a068890f966034467bae49f4c"}
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.734633 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.736247 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr" event={"ID":"01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c","Type":"ContainerStarted","Data":"3bbb20decbdc6b6c78b7947a9c77966b342061e41ee8a4b9a0616c1b9f99ba19"}
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.753747 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-6mhzq" podStartSLOduration=2.352185375 podStartE2EDuration="4.753731503s" podCreationTimestamp="2026-01-30 05:19:25 +0000 UTC" firstStartedPulling="2026-01-30 05:19:26.136596885 +0000 UTC m=+701.506507142" lastFinishedPulling="2026-01-30 05:19:28.538143013 +0000 UTC m=+703.908053270" observedRunningTime="2026-01-30 05:19:29.751996292 +0000 UTC m=+705.121906589" watchObservedRunningTime="2026-01-30 05:19:29.753731503 +0000 UTC m=+705.123641760"
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.778513 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l" podStartSLOduration=2.736762852 podStartE2EDuration="4.77849036s" podCreationTimestamp="2026-01-30 05:19:25 +0000 UTC" firstStartedPulling="2026-01-30 05:19:26.52624159 +0000 UTC m=+701.896151877" lastFinishedPulling="2026-01-30 05:19:28.567969128 +0000 UTC m=+703.937879385" observedRunningTime="2026-01-30 05:19:29.767196193 +0000 UTC m=+705.137106490" watchObservedRunningTime="2026-01-30 05:19:29.77849036 +0000 UTC m=+705.148400637"
Jan 30 05:19:29 crc kubenswrapper[4931]: I0130 05:19:29.844863 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mwzz2" podStartSLOduration=2.393908874 podStartE2EDuration="4.844834124s" podCreationTimestamp="2026-01-30 05:19:25 +0000 UTC" firstStartedPulling="2026-01-30 05:19:27.096698378 +0000 UTC m=+702.466608635" lastFinishedPulling="2026-01-30 05:19:29.547623608 +0000 UTC m=+704.917533885" observedRunningTime="2026-01-30 05:19:29.843535086 +0000 UTC m=+705.213445363" watchObservedRunningTime="2026-01-30 05:19:29.844834124 +0000 UTC m=+705.214744421"
Jan 30 05:19:31 crc kubenswrapper[4931]: I0130 05:19:31.754982 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr" event={"ID":"01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c","Type":"ContainerStarted","Data":"56c2626c9f36c80fd038f53f467d581b5d6e1fbcd527b45985cc7cdb7ece24e3"}
Jan 30 05:19:31 crc kubenswrapper[4931]: I0130 05:19:31.785076 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-2z4jr" podStartSLOduration=2.545185019 podStartE2EDuration="6.78504617s" podCreationTimestamp="2026-01-30 05:19:25 +0000 UTC" firstStartedPulling="2026-01-30 05:19:26.597410503 +0000 UTC m=+701.967320800" lastFinishedPulling="2026-01-30 05:19:30.837271684 +0000 UTC m=+706.207181951" observedRunningTime="2026-01-30 05:19:31.781755754 +0000 UTC m=+707.151666041" watchObservedRunningTime="2026-01-30 05:19:31.78504617 +0000 UTC m=+707.154956467"
Jan 30 05:19:36 crc kubenswrapper[4931]: I0130 05:19:36.140564 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-6mhzq"
Jan 30 05:19:36 crc kubenswrapper[4931]: I0130 05:19:36.428172 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:36 crc kubenswrapper[4931]: I0130 05:19:36.429353 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:36 crc kubenswrapper[4931]: I0130 05:19:36.435521 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:36 crc kubenswrapper[4931]: I0130 05:19:36.794337 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5977cc965f-tjfns"
Jan 30 05:19:36 crc kubenswrapper[4931]: I0130 05:19:36.871527 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ff4lr"]
Jan 30 05:19:46 crc kubenswrapper[4931]: I0130 05:19:46.054453 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-krf2l"
Jan 30 05:19:57 crc kubenswrapper[4931]: I0130 05:19:57.363729 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 05:19:57 crc kubenswrapper[4931]: I0130 05:19:57.364463 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 05:20:01 crc kubenswrapper[4931]: I0130 05:20:01.923875 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-ff4lr" podUID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" containerName="console" containerID="cri-o://0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469" gracePeriod=15
Jan 30 05:20:01 crc kubenswrapper[4931]: I0130 05:20:01.949514 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"]
Jan 30 05:20:01 crc kubenswrapper[4931]: I0130 05:20:01.950740 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:01 crc kubenswrapper[4931]: I0130 05:20:01.955142 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 05:20:01 crc kubenswrapper[4931]: I0130 05:20:01.980883 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"]
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.144300 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw68b\" (UniqueName: \"kubernetes.io/projected/150d0383-4876-424e-b189-6ce3cceccb72-kube-api-access-hw68b\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.144822 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.145001 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.246756 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.246851 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.247283 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.247323 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.247378 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw68b\" (UniqueName: \"kubernetes.io/projected/150d0383-4876-424e-b189-6ce3cceccb72-kube-api-access-hw68b\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.253537 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ff4lr_cf0e8eba-09e8-4d9c-87de-9c57583e7276/console/0.log"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.253684 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.287463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw68b\" (UniqueName: \"kubernetes.io/projected/150d0383-4876-424e-b189-6ce3cceccb72-kube-api-access-hw68b\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.347805 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-oauth-serving-cert\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.348186 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-serving-cert\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.348705 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349098 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-oauth-config\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349177 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-trusted-ca-bundle\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349247 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-config\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349276 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-service-ca\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349315 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmxjn\" (UniqueName: \"kubernetes.io/projected/cf0e8eba-09e8-4d9c-87de-9c57583e7276-kube-api-access-cmxjn\") pod \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\" (UID: \"cf0e8eba-09e8-4d9c-87de-9c57583e7276\") "
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349842 4931 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.349949 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-config" (OuterVolumeSpecName: "console-config") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.350012 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.350679 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-service-ca" (OuterVolumeSpecName: "service-ca") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.354273 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.354760 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0e8eba-09e8-4d9c-87de-9c57583e7276-kube-api-access-cmxjn" (OuterVolumeSpecName: "kube-api-access-cmxjn") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "kube-api-access-cmxjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.354769 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "cf0e8eba-09e8-4d9c-87de-9c57583e7276" (UID: "cf0e8eba-09e8-4d9c-87de-9c57583e7276"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.452863 4931 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.452918 4931 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.452937 4931 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.452954 4931 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-console-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.452971 4931 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cf0e8eba-09e8-4d9c-87de-9c57583e7276-service-ca\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.453023 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmxjn\" (UniqueName: \"kubernetes.io/projected/cf0e8eba-09e8-4d9c-87de-9c57583e7276-kube-api-access-cmxjn\") on node \"crc\" DevicePath \"\""
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.569866 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:02 crc kubenswrapper[4931]: I0130 05:20:02.817861 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"]
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.020660 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-ff4lr_cf0e8eba-09e8-4d9c-87de-9c57583e7276/console/0.log"
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.020747 4931 generic.go:334] "Generic (PLEG): container finished" podID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" containerID="0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469" exitCode=2
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.020848 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ff4lr"
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.020850 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ff4lr" event={"ID":"cf0e8eba-09e8-4d9c-87de-9c57583e7276","Type":"ContainerDied","Data":"0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469"}
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.021098 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ff4lr" event={"ID":"cf0e8eba-09e8-4d9c-87de-9c57583e7276","Type":"ContainerDied","Data":"6ef4e3652e767b58bcd714efc40fa7c13d1316dc132366e3239b8378ad811289"}
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.021147 4931 scope.go:117] "RemoveContainer" containerID="0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469"
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.025061 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" event={"ID":"150d0383-4876-424e-b189-6ce3cceccb72","Type":"ContainerStarted","Data":"b8ea0d39de1aa49f8377b1ecb36b19511758a7d0acc80c3cd8e472a335dae098"}
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.025120 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" event={"ID":"150d0383-4876-424e-b189-6ce3cceccb72","Type":"ContainerStarted","Data":"21ab14cb6f82da964f1a6c0f8bdda169d8c7afca5fe9ac4060f3a5f594f49867"}
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.106740 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-ff4lr"]
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.107745 4931 scope.go:117] "RemoveContainer" containerID="0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469"
Jan 30 05:20:03 crc kubenswrapper[4931]: E0130 05:20:03.108752 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469\": container with ID starting with 0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469 not found: ID does not exist" containerID="0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469"
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.108817 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469"} err="failed to get container status \"0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469\": rpc error: code = NotFound desc = could not find container \"0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469\": container with ID starting with 0fe79afe64820780e0fe369f0129eb24a6f58368212600b876f958b976ab3469 not found: ID does not exist"
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.112557 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-ff4lr"]
Jan 30 05:20:03 crc kubenswrapper[4931]: I0130 05:20:03.437189 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" path="/var/lib/kubelet/pods/cf0e8eba-09e8-4d9c-87de-9c57583e7276/volumes"
Jan 30 05:20:04 crc kubenswrapper[4931]: I0130 05:20:04.035519 4931 generic.go:334] "Generic (PLEG): container finished" podID="150d0383-4876-424e-b189-6ce3cceccb72" containerID="b8ea0d39de1aa49f8377b1ecb36b19511758a7d0acc80c3cd8e472a335dae098" exitCode=0
Jan 30 05:20:04 crc kubenswrapper[4931]: I0130 05:20:04.035571 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" event={"ID":"150d0383-4876-424e-b189-6ce3cceccb72","Type":"ContainerDied","Data":"b8ea0d39de1aa49f8377b1ecb36b19511758a7d0acc80c3cd8e472a335dae098"}
Jan 30 05:20:06 crc kubenswrapper[4931]: I0130 05:20:06.057329 4931 generic.go:334] "Generic (PLEG): container finished" podID="150d0383-4876-424e-b189-6ce3cceccb72" containerID="43de2be3b10e7b3f73c9e9d4410f86d05869ae3591a6ffa147ddbbc37f0a585d" exitCode=0
Jan 30 05:20:06 crc kubenswrapper[4931]: I0130 05:20:06.057493 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" event={"ID":"150d0383-4876-424e-b189-6ce3cceccb72","Type":"ContainerDied","Data":"43de2be3b10e7b3f73c9e9d4410f86d05869ae3591a6ffa147ddbbc37f0a585d"}
Jan 30 05:20:07 crc kubenswrapper[4931]: I0130 05:20:07.069629 4931 generic.go:334] "Generic (PLEG): container finished" podID="150d0383-4876-424e-b189-6ce3cceccb72" containerID="df26810aae5b9065c104376b590b00682c6e6d7f7635ff7f535375e2d768201f" exitCode=0
Jan 30 05:20:07 crc kubenswrapper[4931]: I0130 05:20:07.069698 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" event={"ID":"150d0383-4876-424e-b189-6ce3cceccb72","Type":"ContainerDied","Data":"df26810aae5b9065c104376b590b00682c6e6d7f7635ff7f535375e2d768201f"}
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.380326 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6"
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.543409 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw68b\" (UniqueName: \"kubernetes.io/projected/150d0383-4876-424e-b189-6ce3cceccb72-kube-api-access-hw68b\") pod \"150d0383-4876-424e-b189-6ce3cceccb72\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") "
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.543574 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-bundle\") pod \"150d0383-4876-424e-b189-6ce3cceccb72\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") "
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.543631 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-util\") pod \"150d0383-4876-424e-b189-6ce3cceccb72\" (UID: \"150d0383-4876-424e-b189-6ce3cceccb72\") "
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.544505 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-bundle" (OuterVolumeSpecName: "bundle") pod "150d0383-4876-424e-b189-6ce3cceccb72" (UID: "150d0383-4876-424e-b189-6ce3cceccb72"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.548541 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150d0383-4876-424e-b189-6ce3cceccb72-kube-api-access-hw68b" (OuterVolumeSpecName: "kube-api-access-hw68b") pod "150d0383-4876-424e-b189-6ce3cceccb72" (UID: "150d0383-4876-424e-b189-6ce3cceccb72"). InnerVolumeSpecName "kube-api-access-hw68b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.583199 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-util" (OuterVolumeSpecName: "util") pod "150d0383-4876-424e-b189-6ce3cceccb72" (UID: "150d0383-4876-424e-b189-6ce3cceccb72"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.644842 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.644889 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/150d0383-4876-424e-b189-6ce3cceccb72-util\") on node \"crc\" DevicePath \"\"" Jan 30 05:20:08 crc kubenswrapper[4931]: I0130 05:20:08.644908 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw68b\" (UniqueName: \"kubernetes.io/projected/150d0383-4876-424e-b189-6ce3cceccb72-kube-api-access-hw68b\") on node \"crc\" DevicePath \"\"" Jan 30 05:20:09 crc kubenswrapper[4931]: I0130 05:20:09.087976 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" event={"ID":"150d0383-4876-424e-b189-6ce3cceccb72","Type":"ContainerDied","Data":"21ab14cb6f82da964f1a6c0f8bdda169d8c7afca5fe9ac4060f3a5f594f49867"} Jan 30 05:20:09 crc kubenswrapper[4931]: I0130 05:20:09.088036 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21ab14cb6f82da964f1a6c0f8bdda169d8c7afca5fe9ac4060f3a5f594f49867" Jan 30 05:20:09 crc kubenswrapper[4931]: I0130 05:20:09.088072 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6" Jan 30 05:20:19 crc kubenswrapper[4931]: I0130 05:20:19.440375 4931 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.095670 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg"] Jan 30 05:20:20 crc kubenswrapper[4931]: E0130 05:20:20.095863 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="extract" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.095875 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="extract" Jan 30 05:20:20 crc kubenswrapper[4931]: E0130 05:20:20.095894 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="pull" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.095900 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="pull" Jan 30 05:20:20 crc kubenswrapper[4931]: E0130 05:20:20.095911 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="util" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.095916 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="util" Jan 30 05:20:20 crc kubenswrapper[4931]: E0130 05:20:20.095926 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" containerName="console" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.095932 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" containerName="console" Jan 30 
05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.096018 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="150d0383-4876-424e-b189-6ce3cceccb72" containerName="extract" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.096033 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e8eba-09e8-4d9c-87de-9c57583e7276" containerName="console" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.096403 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.104246 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.104560 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.104523 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.104658 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-f5n8j" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.104781 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.161524 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg"] Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.296878 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-webhook-cert\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.296922 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ckk\" (UniqueName: \"kubernetes.io/projected/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-kube-api-access-n8ckk\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.296974 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-apiservice-cert\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.351352 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf"] Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.352177 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.354821 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.354934 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.355224 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bqx46" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.388912 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf"] Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.398160 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47321851-ef2d-47a3-949a-58f2e87df8dd-webhook-cert\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.398249 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-apiservice-cert\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.398314 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqjzn\" (UniqueName: \"kubernetes.io/projected/47321851-ef2d-47a3-949a-58f2e87df8dd-kube-api-access-rqjzn\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.398362 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-webhook-cert\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.398393 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ckk\" (UniqueName: \"kubernetes.io/projected/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-kube-api-access-n8ckk\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.398445 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47321851-ef2d-47a3-949a-58f2e87df8dd-apiservice-cert\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc 
kubenswrapper[4931]: I0130 05:20:20.405592 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-webhook-cert\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.407101 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-apiservice-cert\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.454042 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ckk\" (UniqueName: \"kubernetes.io/projected/164111f5-1bd4-4fc2-84f5-7418ee6e7e62-kube-api-access-n8ckk\") pod \"metallb-operator-controller-manager-6969d469fc-rzjqg\" (UID: \"164111f5-1bd4-4fc2-84f5-7418ee6e7e62\") " pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.500990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47321851-ef2d-47a3-949a-58f2e87df8dd-apiservice-cert\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.501062 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47321851-ef2d-47a3-949a-58f2e87df8dd-webhook-cert\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.501111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqjzn\" (UniqueName: \"kubernetes.io/projected/47321851-ef2d-47a3-949a-58f2e87df8dd-kube-api-access-rqjzn\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.509982 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/47321851-ef2d-47a3-949a-58f2e87df8dd-apiservice-cert\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.522490 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47321851-ef2d-47a3-949a-58f2e87df8dd-webhook-cert\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.532230 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rqjzn\" (UniqueName: \"kubernetes.io/projected/47321851-ef2d-47a3-949a-58f2e87df8dd-kube-api-access-rqjzn\") pod \"metallb-operator-webhook-server-7659bb7b4d-ssrqf\" (UID: \"47321851-ef2d-47a3-949a-58f2e87df8dd\") " pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.675251 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.711251 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.880177 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf"] Jan 30 05:20:20 crc kubenswrapper[4931]: W0130 05:20:20.896210 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47321851_ef2d_47a3_949a_58f2e87df8dd.slice/crio-3ffa202fc561fca0cfedec3a1fd08b9c9ea54f4c21dffcfd3ad688c30952b6ed WatchSource:0}: Error finding container 3ffa202fc561fca0cfedec3a1fd08b9c9ea54f4c21dffcfd3ad688c30952b6ed: Status 404 returned error can't find the container with id 3ffa202fc561fca0cfedec3a1fd08b9c9ea54f4c21dffcfd3ad688c30952b6ed Jan 30 05:20:20 crc kubenswrapper[4931]: I0130 05:20:20.931210 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg"] Jan 30 05:20:20 crc kubenswrapper[4931]: W0130 05:20:20.935541 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod164111f5_1bd4_4fc2_84f5_7418ee6e7e62.slice/crio-62739bcd7c9196a59d8e45e81b5b4a936544a9cfd2d710b9b8d6024a71442c07 WatchSource:0}: Error finding container 62739bcd7c9196a59d8e45e81b5b4a936544a9cfd2d710b9b8d6024a71442c07: Status 404 returned error can't find the container with id 62739bcd7c9196a59d8e45e81b5b4a936544a9cfd2d710b9b8d6024a71442c07 Jan 30 05:20:21 crc kubenswrapper[4931]: I0130 05:20:21.175359 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" event={"ID":"164111f5-1bd4-4fc2-84f5-7418ee6e7e62","Type":"ContainerStarted","Data":"62739bcd7c9196a59d8e45e81b5b4a936544a9cfd2d710b9b8d6024a71442c07"} Jan 30 05:20:21 crc kubenswrapper[4931]: I0130 05:20:21.176944 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" event={"ID":"47321851-ef2d-47a3-949a-58f2e87df8dd","Type":"ContainerStarted","Data":"3ffa202fc561fca0cfedec3a1fd08b9c9ea54f4c21dffcfd3ad688c30952b6ed"} Jan 30 05:20:25 crc kubenswrapper[4931]: I0130 05:20:25.203539 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" event={"ID":"164111f5-1bd4-4fc2-84f5-7418ee6e7e62","Type":"ContainerStarted","Data":"b7c1b4383a10a13ffb4a3e2d895cf39fa28755c38c12fc5357eda97029d6852e"} Jan 30 05:20:25 crc kubenswrapper[4931]: I0130 05:20:25.204147 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:20:25 crc kubenswrapper[4931]: I0130 05:20:25.226768 4931 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" podStartSLOduration=2.007064976 podStartE2EDuration="5.226753892s" podCreationTimestamp="2026-01-30 05:20:20 +0000 UTC" firstStartedPulling="2026-01-30 05:20:20.938157926 +0000 UTC m=+756.308068193" lastFinishedPulling="2026-01-30 05:20:24.157846852 +0000 UTC m=+759.527757109" observedRunningTime="2026-01-30 05:20:25.22636151 +0000 UTC m=+760.596271767" watchObservedRunningTime="2026-01-30 05:20:25.226753892 +0000 UTC m=+760.596664149" Jan 30 05:20:27 crc kubenswrapper[4931]: I0130 05:20:27.363754 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:20:27 crc kubenswrapper[4931]: I0130 05:20:27.364122 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:20:27 crc kubenswrapper[4931]: I0130 05:20:27.364199 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:20:27 crc kubenswrapper[4931]: I0130 05:20:27.365256 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1794ca6ffdd404c39dffe9fa048526a3a78869de00d876a52d3bd280c8bbc2a2"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:20:27 crc kubenswrapper[4931]: I0130 05:20:27.365389 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://1794ca6ffdd404c39dffe9fa048526a3a78869de00d876a52d3bd280c8bbc2a2" gracePeriod=600 Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.225716 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="1794ca6ffdd404c39dffe9fa048526a3a78869de00d876a52d3bd280c8bbc2a2" exitCode=0 Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.225810 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"1794ca6ffdd404c39dffe9fa048526a3a78869de00d876a52d3bd280c8bbc2a2"} Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.226491 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"60aa2b4543ca7f8bb21bef3c167f0da099829d6ddc544f185e09f8c4de74ad75"} Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.226541 4931 scope.go:117] "RemoveContainer" containerID="ca23316a7a7a0870cd6ce778a3ddf7b3692d29f58078872d0288efcbee40c2e8" Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.228864 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" event={"ID":"47321851-ef2d-47a3-949a-58f2e87df8dd","Type":"ContainerStarted","Data":"159bf76c0061e6eddc60e9893eaf6b70e748433a758ebf68372dfd7d9f90924b"} Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.229071 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:20:28 crc kubenswrapper[4931]: I0130 05:20:28.280701 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" podStartSLOduration=1.965692415 podStartE2EDuration="8.280680766s" podCreationTimestamp="2026-01-30 05:20:20 +0000 UTC" firstStartedPulling="2026-01-30 05:20:20.901377705 +0000 UTC m=+756.271287972" lastFinishedPulling="2026-01-30 05:20:27.216366066 +0000 UTC m=+762.586276323" observedRunningTime="2026-01-30 05:20:28.269532561 +0000 UTC m=+763.639442848" watchObservedRunningTime="2026-01-30 05:20:28.280680766 +0000 UTC m=+763.650591043" Jan 30 05:20:40 crc kubenswrapper[4931]: I0130 05:20:40.684898 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7659bb7b4d-ssrqf" Jan 30 05:21:00 crc kubenswrapper[4931]: I0130 05:21:00.714956 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6969d469fc-rzjqg" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.491215 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz"] Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.492451 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.494667 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.494905 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4ztgc" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.498446 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-768qr"] Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.501205 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.502703 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.504003 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.504471 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz"] Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578510 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-sockets\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578568 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578596 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-metrics\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578633 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pghnp\" (UniqueName: \"kubernetes.io/projected/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-kube-api-access-pghnp\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578677 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be5b19a8-200f-462e-b8f2-fc956ec52080-metrics-certs\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578709 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-reloader\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578770 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-conf\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjpw\" (UniqueName: 
\"kubernetes.io/projected/be5b19a8-200f-462e-b8f2-fc956ec52080-kube-api-access-tjjpw\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.578827 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-startup\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.605210 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rcpl2"] Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.606103 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.610349 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tf9cf" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.610409 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.610461 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.610504 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.622627 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-g5mxs"] Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.623488 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.625615 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.649157 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-g5mxs"] Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679479 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679526 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-metrics\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679559 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pghnp\" (UniqueName: \"kubernetes.io/projected/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-kube-api-access-pghnp\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679583 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be5b19a8-200f-462e-b8f2-fc956ec52080-metrics-certs\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679607 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-reloader\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679636 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-conf\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679652 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjjpw\" (UniqueName: \"kubernetes.io/projected/be5b19a8-200f-462e-b8f2-fc956ec52080-kube-api-access-tjjpw\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679672 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-startup\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.679698 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-sockets\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.680041 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-sockets\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.680222 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-reloader\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.680453 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-conf\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.680882 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/be5b19a8-200f-462e-b8f2-fc956ec52080-metrics\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: E0130 05:21:01.680904 4931 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 05:21:01 crc kubenswrapper[4931]: E0130 05:21:01.680987 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-cert podName:3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e nodeName:}" failed. No retries permitted until 2026-01-30 05:21:02.180965525 +0000 UTC m=+797.550875782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-cert") pod "frr-k8s-webhook-server-7df86c4f6c-56ftz" (UID: "3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e") : secret "frr-k8s-webhook-server-cert" not found Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.681347 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/be5b19a8-200f-462e-b8f2-fc956ec52080-frr-startup\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.693057 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be5b19a8-200f-462e-b8f2-fc956ec52080-metrics-certs\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.695803 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjjpw\" (UniqueName: \"kubernetes.io/projected/be5b19a8-200f-462e-b8f2-fc956ec52080-kube-api-access-tjjpw\") pod \"frr-k8s-768qr\" (UID: \"be5b19a8-200f-462e-b8f2-fc956ec52080\") " pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.696501 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pghnp\" (UniqueName: \"kubernetes.io/projected/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-kube-api-access-pghnp\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781324 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-metrics-certs\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781387 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z8pp\" (UniqueName: \"kubernetes.io/projected/c9c06e8c-f207-490b-8bea-d6a742d63e72-kube-api-access-2z8pp\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781434 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-cert\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781513 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-metallb-excludel2\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781621 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7xf\" (UniqueName: 
\"kubernetes.io/projected/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-kube-api-access-kh7xf\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781681 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-metrics-certs\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.781791 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882551 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-768qr" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882799 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882862 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-metrics-certs\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882892 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8pp\" (UniqueName: \"kubernetes.io/projected/c9c06e8c-f207-490b-8bea-d6a742d63e72-kube-api-access-2z8pp\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882914 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-cert\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882941 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-metallb-excludel2\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.882976 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7xf\" (UniqueName: \"kubernetes.io/projected/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-kube-api-access-kh7xf\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.883000 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-metrics-certs\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: E0130 05:21:01.883130 4931 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 30 05:21:01 crc kubenswrapper[4931]: E0130 05:21:01.883175 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-metrics-certs podName:c9c06e8c-f207-490b-8bea-d6a742d63e72 nodeName:}" failed. No retries permitted until 2026-01-30 05:21:02.383160933 +0000 UTC m=+797.753071180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-metrics-certs") pod "controller-6968d8fdc4-g5mxs" (UID: "c9c06e8c-f207-490b-8bea-d6a742d63e72") : secret "controller-certs-secret" not found Jan 30 05:21:01 crc kubenswrapper[4931]: E0130 05:21:01.883397 4931 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 05:21:01 crc kubenswrapper[4931]: E0130 05:21:01.883490 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist podName:f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18 nodeName:}" failed. No retries permitted until 2026-01-30 05:21:02.383480452 +0000 UTC m=+797.753390709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist") pod "speaker-rcpl2" (UID: "f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18") : secret "metallb-memberlist" not found Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.884114 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-metallb-excludel2\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.886938 4931 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.887410 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-metrics-certs\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.898024 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-cert\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.907139 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7xf\" (UniqueName: \"kubernetes.io/projected/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-kube-api-access-kh7xf\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:01 crc kubenswrapper[4931]: I0130 05:21:01.908297 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8pp\" (UniqueName: \"kubernetes.io/projected/c9c06e8c-f207-490b-8bea-d6a742d63e72-kube-api-access-2z8pp\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.186413 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.190745 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-56ftz\" (UID: \"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.388637 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-metrics-certs\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.388754 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:02 crc kubenswrapper[4931]: E0130 05:21:02.388953 4931 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 05:21:02 crc kubenswrapper[4931]: E0130 05:21:02.389036 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist podName:f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18 nodeName:}" failed. No retries permitted until 2026-01-30 05:21:03.389013613 +0000 UTC m=+798.758923910 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist") pod "speaker-rcpl2" (UID: "f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18") : secret "metallb-memberlist" not found Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.394983 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9c06e8c-f207-490b-8bea-d6a742d63e72-metrics-certs\") pod \"controller-6968d8fdc4-g5mxs\" (UID: \"c9c06e8c-f207-490b-8bea-d6a742d63e72\") " pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.461179 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.477547 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"7b50a49fc816ff5341dd997175c4c6b3b9d39b62e08397148b0ba2c206d3b30d"} Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.540980 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.786336 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz"] Jan 30 05:21:02 crc kubenswrapper[4931]: I0130 05:21:02.891199 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-g5mxs"] Jan 30 05:21:02 crc kubenswrapper[4931]: W0130 05:21:02.899755 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9c06e8c_f207_490b_8bea_d6a742d63e72.slice/crio-8c513c5b17f52d69a08cfd284411679c3c0adca1c9e5059f5ef107c77731997b WatchSource:0}: Error finding container 8c513c5b17f52d69a08cfd284411679c3c0adca1c9e5059f5ef107c77731997b: Status 404 returned error can't find the container with id 8c513c5b17f52d69a08cfd284411679c3c0adca1c9e5059f5ef107c77731997b Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.401283 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.409496 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18-memberlist\") pod \"speaker-rcpl2\" (UID: \"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18\") " pod="metallb-system/speaker-rcpl2" Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.425822 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rcpl2" Jan 30 05:21:03 crc kubenswrapper[4931]: W0130 05:21:03.451608 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9e7a8e0_b04b_4d38_a05c_a9baa66d2c18.slice/crio-36fbaf6857ed3b18394339b7a673af544f2ae62e2c1ff510538ec9764c90599d WatchSource:0}: Error finding container 36fbaf6857ed3b18394339b7a673af544f2ae62e2c1ff510538ec9764c90599d: Status 404 returned error can't find the container with id 36fbaf6857ed3b18394339b7a673af544f2ae62e2c1ff510538ec9764c90599d Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.484934 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-g5mxs" event={"ID":"c9c06e8c-f207-490b-8bea-d6a742d63e72","Type":"ContainerStarted","Data":"b99b6e018c11c527be0e94c3c6bfc05b635e157d728c7cc74bdbaad9fdefca78"} Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.485263 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-g5mxs" event={"ID":"c9c06e8c-f207-490b-8bea-d6a742d63e72","Type":"ContainerStarted","Data":"3d0ef34e375de9ece415a82c35363c06734bf76d9f9e31976e861ace41f4a2c1"} Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.485277 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-g5mxs" event={"ID":"c9c06e8c-f207-490b-8bea-d6a742d63e72","Type":"ContainerStarted","Data":"8c513c5b17f52d69a08cfd284411679c3c0adca1c9e5059f5ef107c77731997b"} Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.485586 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-g5mxs" Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.487337 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rcpl2" event={"ID":"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18","Type":"ContainerStarted","Data":"36fbaf6857ed3b18394339b7a673af544f2ae62e2c1ff510538ec9764c90599d"} Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.488772 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" event={"ID":"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e","Type":"ContainerStarted","Data":"7446ca5d1144ddde508fbf779bbefc6130d717b9170d677b9287d5111c1e8d0a"} Jan 30 05:21:03 crc kubenswrapper[4931]: I0130 05:21:03.504010 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-g5mxs" podStartSLOduration=2.503991567 podStartE2EDuration="2.503991567s" podCreationTimestamp="2026-01-30 05:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:21:03.501146407 +0000 UTC m=+798.871056684" watchObservedRunningTime="2026-01-30 05:21:03.503991567 +0000 UTC m=+798.873901824" Jan 30 05:21:04 crc kubenswrapper[4931]: I0130 05:21:04.495586 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rcpl2" event={"ID":"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18","Type":"ContainerStarted","Data":"47e8533c2f9476d932560ca0e77c3e05a995ac8dad3ca7531ed736451c7d8bb1"} Jan 30 05:21:04 crc kubenswrapper[4931]: I0130 05:21:04.495648 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rcpl2" event={"ID":"f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18","Type":"ContainerStarted","Data":"7e32cf4f79c08d2dcce18e81afb226dda165b036f93414277290c7c605fed69a"} 
Jan 30 05:21:04 crc kubenswrapper[4931]: I0130 05:21:04.510058 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rcpl2" podStartSLOduration=3.510042287 podStartE2EDuration="3.510042287s" podCreationTimestamp="2026-01-30 05:21:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:21:04.508782622 +0000 UTC m=+799.878692879" watchObservedRunningTime="2026-01-30 05:21:04.510042287 +0000 UTC m=+799.879952544"
Jan 30 05:21:05 crc kubenswrapper[4931]: I0130 05:21:05.500899 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rcpl2"
Jan 30 05:21:10 crc kubenswrapper[4931]: I0130 05:21:10.548897 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" event={"ID":"3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e","Type":"ContainerStarted","Data":"4c5800e75db9818badd02d28f4f4cdd69afd271f0b4ebdfd87b9ad81321a331e"}
Jan 30 05:21:10 crc kubenswrapper[4931]: I0130 05:21:10.549877 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz"
Jan 30 05:21:10 crc kubenswrapper[4931]: I0130 05:21:10.551276 4931 generic.go:334] "Generic (PLEG): container finished" podID="be5b19a8-200f-462e-b8f2-fc956ec52080" containerID="d091fdb203c9eebcff39560e61d24099acb2730684803a6aad64d3125fbc8900" exitCode=0
Jan 30 05:21:10 crc kubenswrapper[4931]: I0130 05:21:10.551302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerDied","Data":"d091fdb203c9eebcff39560e61d24099acb2730684803a6aad64d3125fbc8900"}
Jan 30 05:21:10 crc kubenswrapper[4931]: I0130 05:21:10.569159 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz" podStartSLOduration=2.911992925 podStartE2EDuration="9.569143459s" podCreationTimestamp="2026-01-30 05:21:01 +0000 UTC" firstStartedPulling="2026-01-30 05:21:02.821581821 +0000 UTC m=+798.191492088" lastFinishedPulling="2026-01-30 05:21:09.478732325 +0000 UTC m=+804.848642622" observedRunningTime="2026-01-30 05:21:10.567744909 +0000 UTC m=+805.937655166" watchObservedRunningTime="2026-01-30 05:21:10.569143459 +0000 UTC m=+805.939053716"
Jan 30 05:21:11 crc kubenswrapper[4931]: I0130 05:21:11.561841 4931 generic.go:334] "Generic (PLEG): container finished" podID="be5b19a8-200f-462e-b8f2-fc956ec52080" containerID="3614b404af2e15a8c92c762f8971d4adf0695ec4c2675a1f4ba165839d330054" exitCode=0
Jan 30 05:21:11 crc kubenswrapper[4931]: I0130 05:21:11.561892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerDied","Data":"3614b404af2e15a8c92c762f8971d4adf0695ec4c2675a1f4ba165839d330054"}
Jan 30 05:21:12 crc kubenswrapper[4931]: I0130 05:21:12.575940 4931 generic.go:334] "Generic (PLEG): container finished" podID="be5b19a8-200f-462e-b8f2-fc956ec52080" containerID="832b72323076f1a2cdb42a6edc7ca402e3fa358a97ca8e86a49bc1733370824f" exitCode=0
Jan 30 05:21:12 crc kubenswrapper[4931]: I0130 05:21:12.576732 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerDied","Data":"832b72323076f1a2cdb42a6edc7ca402e3fa358a97ca8e86a49bc1733370824f"}
Jan 30 05:21:13 crc kubenswrapper[4931]: I0130 05:21:13.430654 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rcpl2"
Jan 30 05:21:13 crc kubenswrapper[4931]: I0130 05:21:13.594990 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"d345466d2acfb0fca0d70ee06649c0cf8629f7cfda0ea80fc958b94cba914897"}
Jan 30 05:21:13 crc kubenswrapper[4931]: I0130 05:21:13.595056 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"689aab8a6c9e67bedae8f0fe7c6d3439fbd3d2a70509ca012f325407586e71a3"}
Jan 30 05:21:13 crc kubenswrapper[4931]: I0130 05:21:13.595071 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"1d4ffcf3b8ce542596cc1a545cdcc8d9a6ba151afb2f6cd25406b2283f55b5a5"}
Jan 30 05:21:13 crc kubenswrapper[4931]: I0130 05:21:13.595085 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"5ee6174343317c45a29a8db376af997e091e54bbbe262524d77e9604e18bef7c"}
Jan 30 05:21:13 crc kubenswrapper[4931]: I0130 05:21:13.595109 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"5923ba459c547a350ff8b7ff30bcebed88229ac1c61a5c117fbb21f27cd8e21a"}
Jan 30 05:21:14 crc kubenswrapper[4931]: I0130 05:21:14.607152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-768qr" event={"ID":"be5b19a8-200f-462e-b8f2-fc956ec52080","Type":"ContainerStarted","Data":"0c5d99e64f4e55b25f067af817de7a33d4fdf66d6bf5189b387f92886daff63d"}
Jan 30 05:21:14 crc kubenswrapper[4931]: I0130 05:21:14.608207 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-768qr"
Jan 30 05:21:14 crc kubenswrapper[4931]: I0130 05:21:14.640633 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-768qr" podStartSLOduration=6.211616472 podStartE2EDuration="13.640615639s" podCreationTimestamp="2026-01-30 05:21:01 +0000 UTC" firstStartedPulling="2026-01-30 05:21:02.047634029 +0000 UTC m=+797.417544286" lastFinishedPulling="2026-01-30 05:21:09.476633186 +0000 UTC m=+804.846543453" observedRunningTime="2026-01-30 05:21:14.639550739 +0000 UTC m=+810.009461006" watchObservedRunningTime="2026-01-30 05:21:14.640615639 +0000 UTC m=+810.010525906"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.220154 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"]
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.222559 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.225904 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.230184 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"]
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.408086 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46nt2\" (UniqueName: \"kubernetes.io/projected/686d3bad-998e-4688-a556-c25a0770810a-kube-api-access-46nt2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.408237 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.408271 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.509121 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.509178 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.509232 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46nt2\" (UniqueName: \"kubernetes.io/projected/686d3bad-998e-4688-a556-c25a0770810a-kube-api-access-46nt2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.509698 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.509971 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.543202 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46nt2\" (UniqueName: \"kubernetes.io/projected/686d3bad-998e-4688-a556-c25a0770810a-kube-api-access-46nt2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:15 crc kubenswrapper[4931]: I0130 05:21:15.576280 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:16 crc kubenswrapper[4931]: I0130 05:21:16.061742 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"]
Jan 30 05:21:16 crc kubenswrapper[4931]: I0130 05:21:16.625980 4931 generic.go:334] "Generic (PLEG): container finished" podID="686d3bad-998e-4688-a556-c25a0770810a" containerID="dbeb18acfd8714226ddd9a2e3758eae93dbab1d2bafd3b44dbac6fcea4d2cc71" exitCode=0
Jan 30 05:21:16 crc kubenswrapper[4931]: I0130 05:21:16.626067 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd" event={"ID":"686d3bad-998e-4688-a556-c25a0770810a","Type":"ContainerDied","Data":"dbeb18acfd8714226ddd9a2e3758eae93dbab1d2bafd3b44dbac6fcea4d2cc71"}
Jan 30 05:21:16 crc kubenswrapper[4931]: I0130 05:21:16.626733 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd" event={"ID":"686d3bad-998e-4688-a556-c25a0770810a","Type":"ContainerStarted","Data":"a8f056d371a59e738cc49b1654c31fbae3f10712ac02381d1d03ca264298db58"}
Jan 30 05:21:16 crc kubenswrapper[4931]: I0130 05:21:16.882840 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-768qr"
Jan 30 05:21:16 crc kubenswrapper[4931]: I0130 05:21:16.958909 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-768qr"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.769730 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzlbl"]
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.772020 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.778572 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzlbl"]
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.864764 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-utilities\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.864845 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-catalog-content\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.864881 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkkgz\" (UniqueName: \"kubernetes.io/projected/15d065d1-b45c-450a-9eec-aa929632433c-kube-api-access-jkkgz\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.965340 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkkgz\" (UniqueName: \"kubernetes.io/projected/15d065d1-b45c-450a-9eec-aa929632433c-kube-api-access-jkkgz\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.965471 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-utilities\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.965514 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-catalog-content\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.966068 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-catalog-content\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.966150 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-utilities\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:18 crc kubenswrapper[4931]: I0130 05:21:18.988578 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkkgz\" (UniqueName: \"kubernetes.io/projected/15d065d1-b45c-450a-9eec-aa929632433c-kube-api-access-jkkgz\") pod \"redhat-operators-dzlbl\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") " pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:19 crc kubenswrapper[4931]: I0130 05:21:19.149524 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:20 crc kubenswrapper[4931]: I0130 05:21:20.653746 4931 generic.go:334] "Generic (PLEG): container finished" podID="686d3bad-998e-4688-a556-c25a0770810a" containerID="8ad6359b697f32b69166cbb4abeaa98ee59ced3e0a39f5170c0a74d04deeeb04" exitCode=0
Jan 30 05:21:20 crc kubenswrapper[4931]: I0130 05:21:20.653878 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd" event={"ID":"686d3bad-998e-4688-a556-c25a0770810a","Type":"ContainerDied","Data":"8ad6359b697f32b69166cbb4abeaa98ee59ced3e0a39f5170c0a74d04deeeb04"}
Jan 30 05:21:20 crc kubenswrapper[4931]: I0130 05:21:20.714457 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzlbl"]
Jan 30 05:21:20 crc kubenswrapper[4931]: W0130 05:21:20.724658 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15d065d1_b45c_450a_9eec_aa929632433c.slice/crio-2d693934f6449d82237d1c208f5b4b2834aad8a3fb2653ce87c489e64b971cbc WatchSource:0}: Error finding container 2d693934f6449d82237d1c208f5b4b2834aad8a3fb2653ce87c489e64b971cbc: Status 404 returned error can't find the container with id 2d693934f6449d82237d1c208f5b4b2834aad8a3fb2653ce87c489e64b971cbc
Jan 30 05:21:21 crc kubenswrapper[4931]: I0130 05:21:21.664038 4931 generic.go:334] "Generic (PLEG): container finished" podID="15d065d1-b45c-450a-9eec-aa929632433c" containerID="84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486" exitCode=0
Jan 30 05:21:21 crc kubenswrapper[4931]: I0130 05:21:21.664093 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerDied","Data":"84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486"}
Jan 30 05:21:21 crc kubenswrapper[4931]: I0130 05:21:21.664486 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerStarted","Data":"2d693934f6449d82237d1c208f5b4b2834aad8a3fb2653ce87c489e64b971cbc"}
Jan 30 05:21:21 crc kubenswrapper[4931]: I0130 05:21:21.673416 4931 generic.go:334] "Generic (PLEG): container finished" podID="686d3bad-998e-4688-a556-c25a0770810a" containerID="38d6d2e1ea8c080852c1379421d5471393a5c209a8bd5c148fd5da9727525c88" exitCode=0
Jan 30 05:21:21 crc kubenswrapper[4931]: I0130 05:21:21.673495 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd" event={"ID":"686d3bad-998e-4688-a556-c25a0770810a","Type":"ContainerDied","Data":"38d6d2e1ea8c080852c1379421d5471393a5c209a8bd5c148fd5da9727525c88"}
Jan 30 05:21:22 crc kubenswrapper[4931]: I0130 05:21:22.470415 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-56ftz"
Jan 30 05:21:22 crc kubenswrapper[4931]: I0130 05:21:22.546727 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-g5mxs"
Jan 30 05:21:22 crc kubenswrapper[4931]: I0130 05:21:22.680482 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerStarted","Data":"1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269"}
Jan 30 05:21:22 crc kubenswrapper[4931]: I0130 05:21:22.965970 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.018414 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-bundle\") pod \"686d3bad-998e-4688-a556-c25a0770810a\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") "
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.018494 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46nt2\" (UniqueName: \"kubernetes.io/projected/686d3bad-998e-4688-a556-c25a0770810a-kube-api-access-46nt2\") pod \"686d3bad-998e-4688-a556-c25a0770810a\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") "
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.018539 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-util\") pod \"686d3bad-998e-4688-a556-c25a0770810a\" (UID: \"686d3bad-998e-4688-a556-c25a0770810a\") "
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.019195 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-bundle" (OuterVolumeSpecName: "bundle") pod "686d3bad-998e-4688-a556-c25a0770810a" (UID: "686d3bad-998e-4688-a556-c25a0770810a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.025699 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686d3bad-998e-4688-a556-c25a0770810a-kube-api-access-46nt2" (OuterVolumeSpecName: "kube-api-access-46nt2") pod "686d3bad-998e-4688-a556-c25a0770810a" (UID: "686d3bad-998e-4688-a556-c25a0770810a"). InnerVolumeSpecName "kube-api-access-46nt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.031318 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-util" (OuterVolumeSpecName: "util") pod "686d3bad-998e-4688-a556-c25a0770810a" (UID: "686d3bad-998e-4688-a556-c25a0770810a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.119787 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.119816 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46nt2\" (UniqueName: \"kubernetes.io/projected/686d3bad-998e-4688-a556-c25a0770810a-kube-api-access-46nt2\") on node \"crc\" DevicePath \"\""
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.119826 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/686d3bad-998e-4688-a556-c25a0770810a-util\") on node \"crc\" DevicePath \"\""
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.693669 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd"
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.693690 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd" event={"ID":"686d3bad-998e-4688-a556-c25a0770810a","Type":"ContainerDied","Data":"a8f056d371a59e738cc49b1654c31fbae3f10712ac02381d1d03ca264298db58"}
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.693763 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f056d371a59e738cc49b1654c31fbae3f10712ac02381d1d03ca264298db58"
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.697040 4931 generic.go:334] "Generic (PLEG): container finished" podID="15d065d1-b45c-450a-9eec-aa929632433c" containerID="1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269" exitCode=0
Jan 30 05:21:23 crc kubenswrapper[4931]: I0130 05:21:23.697108 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerDied","Data":"1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269"}
Jan 30 05:21:25 crc kubenswrapper[4931]: I0130 05:21:25.712279 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerStarted","Data":"ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2"}
Jan 30 05:21:25 crc kubenswrapper[4931]: I0130 05:21:25.730002 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzlbl" podStartSLOduration=4.571202325 podStartE2EDuration="7.729988221s" podCreationTimestamp="2026-01-30 05:21:18 +0000 UTC" firstStartedPulling="2026-01-30 05:21:21.666889947 +0000 UTC m=+817.036800214" lastFinishedPulling="2026-01-30 05:21:24.825675823 +0000 UTC m=+820.195586110" observedRunningTime="2026-01-30 05:21:25.726413691 +0000 UTC m=+821.096323958" watchObservedRunningTime="2026-01-30 05:21:25.729988221 +0000 UTC m=+821.099898478"
Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.171987 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd"]
Jan 30 05:21:28 crc kubenswrapper[4931]: E0130 05:21:28.172817 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="extract"
containerName="extract" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.172832 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="extract" Jan 30 05:21:28 crc kubenswrapper[4931]: E0130 05:21:28.172854 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="util" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.172861 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="util" Jan 30 05:21:28 crc kubenswrapper[4931]: E0130 05:21:28.172870 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="pull" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.172879 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="pull" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.173025 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="686d3bad-998e-4688-a556-c25a0770810a" containerName="extract" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.173490 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.175048 4931 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-lg9z8" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.175331 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.175638 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.190574 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd"] Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.200307 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9b67ac51-9e69-4b48-a6b3-2252a8c635ae-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9bbzd\" (UID: \"9b67ac51-9e69-4b48-a6b3-2252a8c635ae\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.200370 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pgpr\" (UniqueName: \"kubernetes.io/projected/9b67ac51-9e69-4b48-a6b3-2252a8c635ae-kube-api-access-4pgpr\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9bbzd\" (UID: \"9b67ac51-9e69-4b48-a6b3-2252a8c635ae\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.302008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9b67ac51-9e69-4b48-a6b3-2252a8c635ae-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9bbzd\" (UID: \"9b67ac51-9e69-4b48-a6b3-2252a8c635ae\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 
05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.302077 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pgpr\" (UniqueName: \"kubernetes.io/projected/9b67ac51-9e69-4b48-a6b3-2252a8c635ae-kube-api-access-4pgpr\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9bbzd\" (UID: \"9b67ac51-9e69-4b48-a6b3-2252a8c635ae\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.302496 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9b67ac51-9e69-4b48-a6b3-2252a8c635ae-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9bbzd\" (UID: \"9b67ac51-9e69-4b48-a6b3-2252a8c635ae\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.330991 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pgpr\" (UniqueName: \"kubernetes.io/projected/9b67ac51-9e69-4b48-a6b3-2252a8c635ae-kube-api-access-4pgpr\") pod \"cert-manager-operator-controller-manager-66c8bdd694-9bbzd\" (UID: \"9b67ac51-9e69-4b48-a6b3-2252a8c635ae\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.539570 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" Jan 30 05:21:28 crc kubenswrapper[4931]: I0130 05:21:28.952891 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd"] Jan 30 05:21:28 crc kubenswrapper[4931]: W0130 05:21:28.966452 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b67ac51_9e69_4b48_a6b3_2252a8c635ae.slice/crio-01273766cf6a03889bff8cd66278caf13630c4b081a9550ce6ddacfebab92d0e WatchSource:0}: Error finding container 01273766cf6a03889bff8cd66278caf13630c4b081a9550ce6ddacfebab92d0e: Status 404 returned error can't find the container with id 01273766cf6a03889bff8cd66278caf13630c4b081a9550ce6ddacfebab92d0e Jan 30 05:21:29 crc kubenswrapper[4931]: I0130 05:21:29.149909 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzlbl" Jan 30 05:21:29 crc kubenswrapper[4931]: I0130 05:21:29.150047 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzlbl" Jan 30 05:21:29 crc kubenswrapper[4931]: I0130 05:21:29.741086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" event={"ID":"9b67ac51-9e69-4b48-a6b3-2252a8c635ae","Type":"ContainerStarted","Data":"01273766cf6a03889bff8cd66278caf13630c4b081a9550ce6ddacfebab92d0e"} Jan 30 05:21:30 crc kubenswrapper[4931]: I0130 05:21:30.229123 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzlbl" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="registry-server" probeResult="failure" output=< Jan 30 05:21:30 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 05:21:30 crc kubenswrapper[4931]: > Jan 30 05:21:31 crc kubenswrapper[4931]: I0130 05:21:31.897546 4931 
Jan 30 05:21:32 crc kubenswrapper[4931]: I0130 05:21:32.764035 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" event={"ID":"9b67ac51-9e69-4b48-a6b3-2252a8c635ae","Type":"ContainerStarted","Data":"4adc7f48a4b039047124a737e2ee98b55324b94ed05a4187b66008366f4ec0c9"}
Jan 30 05:21:32 crc kubenswrapper[4931]: I0130 05:21:32.799151 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-9bbzd" podStartSLOduration=1.6231087039999998 podStartE2EDuration="4.799125965s" podCreationTimestamp="2026-01-30 05:21:28 +0000 UTC" firstStartedPulling="2026-01-30 05:21:28.969383535 +0000 UTC m=+824.339293802" lastFinishedPulling="2026-01-30 05:21:32.145400796 +0000 UTC m=+827.515311063" observedRunningTime="2026-01-30 05:21:32.794141565 +0000 UTC m=+828.164051852" watchObservedRunningTime="2026-01-30 05:21:32.799125965 +0000 UTC m=+828.169036262"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.367255 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hsrfm"]
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.368765 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.372207 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.372362 4931 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qlvrg"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.372417 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.388862 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hsrfm"]
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.503743 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5049b2a6-f85e-4250-9b12-c70705adaf35-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hsrfm\" (UID: \"5049b2a6-f85e-4250-9b12-c70705adaf35\") " pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.503888 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67w6m\" (UniqueName: \"kubernetes.io/projected/5049b2a6-f85e-4250-9b12-c70705adaf35-kube-api-access-67w6m\") pod \"cert-manager-webhook-6888856db4-hsrfm\" (UID: \"5049b2a6-f85e-4250-9b12-c70705adaf35\") " pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.605396 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67w6m\" (UniqueName: \"kubernetes.io/projected/5049b2a6-f85e-4250-9b12-c70705adaf35-kube-api-access-67w6m\") pod \"cert-manager-webhook-6888856db4-hsrfm\" (UID: \"5049b2a6-f85e-4250-9b12-c70705adaf35\") " pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.605666 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5049b2a6-f85e-4250-9b12-c70705adaf35-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hsrfm\" (UID: \"5049b2a6-f85e-4250-9b12-c70705adaf35\") " pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.640903 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5049b2a6-f85e-4250-9b12-c70705adaf35-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hsrfm\" (UID: \"5049b2a6-f85e-4250-9b12-c70705adaf35\") " pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.640968 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67w6m\" (UniqueName: \"kubernetes.io/projected/5049b2a6-f85e-4250-9b12-c70705adaf35-kube-api-access-67w6m\") pod \"cert-manager-webhook-6888856db4-hsrfm\" (UID: \"5049b2a6-f85e-4250-9b12-c70705adaf35\") " pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:36 crc kubenswrapper[4931]: I0130 05:21:36.698693 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm"
Jan 30 05:21:37 crc kubenswrapper[4931]: I0130 05:21:37.021997 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hsrfm"]
Jan 30 05:21:37 crc kubenswrapper[4931]: I0130 05:21:37.805811 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm" event={"ID":"5049b2a6-f85e-4250-9b12-c70705adaf35","Type":"ContainerStarted","Data":"d4364c06c83c8bc95959ce092c6feddcbb0bf46ffc65930c8af19d5e9b61ab34"}
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.207538 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.256016 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.451452 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzlbl"]
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.872851 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qvz8h"]
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.873582 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.876857 4931 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-p2d92"
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.889507 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qvz8h"]
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.952794 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39da06e0-e9ea-4570-b486-3c0d2fe79820-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qvz8h\" (UID: \"39da06e0-e9ea-4570-b486-3c0d2fe79820\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:39 crc kubenswrapper[4931]: I0130 05:21:39.952916 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfwg\" (UniqueName: \"kubernetes.io/projected/39da06e0-e9ea-4570-b486-3c0d2fe79820-kube-api-access-pbfwg\") pod \"cert-manager-cainjector-5545bd876-qvz8h\" (UID: \"39da06e0-e9ea-4570-b486-3c0d2fe79820\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:40 crc kubenswrapper[4931]: I0130 05:21:40.053988 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39da06e0-e9ea-4570-b486-3c0d2fe79820-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qvz8h\" (UID: \"39da06e0-e9ea-4570-b486-3c0d2fe79820\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:40 crc kubenswrapper[4931]: I0130 05:21:40.054071 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbfwg\" (UniqueName: \"kubernetes.io/projected/39da06e0-e9ea-4570-b486-3c0d2fe79820-kube-api-access-pbfwg\") pod \"cert-manager-cainjector-5545bd876-qvz8h\" (UID: \"39da06e0-e9ea-4570-b486-3c0d2fe79820\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:40 crc kubenswrapper[4931]: I0130 05:21:40.078001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39da06e0-e9ea-4570-b486-3c0d2fe79820-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qvz8h\" (UID: \"39da06e0-e9ea-4570-b486-3c0d2fe79820\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:40 crc kubenswrapper[4931]: I0130 05:21:40.081628 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbfwg\" (UniqueName: \"kubernetes.io/projected/39da06e0-e9ea-4570-b486-3c0d2fe79820-kube-api-access-pbfwg\") pod \"cert-manager-cainjector-5545bd876-qvz8h\" (UID: \"39da06e0-e9ea-4570-b486-3c0d2fe79820\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:40 crc kubenswrapper[4931]: I0130 05:21:40.188766 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h"
Jan 30 05:21:40 crc kubenswrapper[4931]: I0130 05:21:40.823629 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dzlbl" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="registry-server" containerID="cri-o://ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2" gracePeriod=2
Jan 30 05:21:41 crc kubenswrapper[4931]: I0130 05:21:41.560030 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlbl"
Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.540695 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-utilities\") pod \"15d065d1-b45c-450a-9eec-aa929632433c\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") "
Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.540747 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkkgz\" (UniqueName: \"kubernetes.io/projected/15d065d1-b45c-450a-9eec-aa929632433c-kube-api-access-jkkgz\") pod \"15d065d1-b45c-450a-9eec-aa929632433c\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") "
Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.540816 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-catalog-content\") pod \"15d065d1-b45c-450a-9eec-aa929632433c\" (UID: \"15d065d1-b45c-450a-9eec-aa929632433c\") "
Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.542791 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-utilities" (OuterVolumeSpecName: "utilities") pod "15d065d1-b45c-450a-9eec-aa929632433c" (UID: "15d065d1-b45c-450a-9eec-aa929632433c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.564074 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d065d1-b45c-450a-9eec-aa929632433c-kube-api-access-jkkgz" (OuterVolumeSpecName: "kube-api-access-jkkgz") pod "15d065d1-b45c-450a-9eec-aa929632433c" (UID: "15d065d1-b45c-450a-9eec-aa929632433c"). InnerVolumeSpecName "kube-api-access-jkkgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.608862 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.644602 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.644695 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkkgz\" (UniqueName: \"kubernetes.io/projected/15d065d1-b45c-450a-9eec-aa929632433c-kube-api-access-jkkgz\") on node \"crc\" DevicePath \"\"" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.651188 4931 generic.go:334] "Generic (PLEG): container finished" podID="15d065d1-b45c-450a-9eec-aa929632433c" containerID="ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2" exitCode=0 Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.651241 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerDied","Data":"ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2"} Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.651267 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzlbl" event={"ID":"15d065d1-b45c-450a-9eec-aa929632433c","Type":"ContainerDied","Data":"2d693934f6449d82237d1c208f5b4b2834aad8a3fb2653ce87c489e64b971cbc"} Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.651321 4931 scope.go:117] "RemoveContainer" containerID="ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.651517 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzlbl" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.682934 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm" podStartSLOduration=2.3317749770000002 podStartE2EDuration="6.682912574s" podCreationTimestamp="2026-01-30 05:21:36 +0000 UTC" firstStartedPulling="2026-01-30 05:21:37.026880701 +0000 UTC m=+832.396790958" lastFinishedPulling="2026-01-30 05:21:41.378018298 +0000 UTC m=+836.747928555" observedRunningTime="2026-01-30 05:21:42.665242627 +0000 UTC m=+838.035152884" watchObservedRunningTime="2026-01-30 05:21:42.682912574 +0000 UTC m=+838.052822831" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.702738 4931 scope.go:117] "RemoveContainer" containerID="1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269" Jan 30 05:21:42 crc kubenswrapper[4931]: W0130 05:21:42.722561 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39da06e0_e9ea_4570_b486_3c0d2fe79820.slice/crio-a1b1c4f3db08a3fca137107295c23f533b18590d886a51a91a92a351bf5f8b8a WatchSource:0}: Error finding container a1b1c4f3db08a3fca137107295c23f533b18590d886a51a91a92a351bf5f8b8a: Status 404 returned error can't find the container with id a1b1c4f3db08a3fca137107295c23f533b18590d886a51a91a92a351bf5f8b8a Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.748024 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qvz8h"] Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.759525 4931 scope.go:117] "RemoveContainer" containerID="84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.803634 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15d065d1-b45c-450a-9eec-aa929632433c" (UID: "15d065d1-b45c-450a-9eec-aa929632433c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.807327 4931 scope.go:117] "RemoveContainer" containerID="ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2" Jan 30 05:21:42 crc kubenswrapper[4931]: E0130 05:21:42.807729 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2\": container with ID starting with ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2 not found: ID does not exist" containerID="ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.807783 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2"} err="failed to get container status \"ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2\": rpc error: code = NotFound desc = could not find container \"ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2\": container with ID starting with ed1f535365a23ff00632912b9812f20b552c9fe41ab911db1619fcd42b6463c2 not found: ID does not exist" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.807805 4931 scope.go:117] "RemoveContainer" containerID="1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269" Jan 30 05:21:42 crc kubenswrapper[4931]: E0130 05:21:42.808878 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269\": container with ID starting with 1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269 not found: ID does not exist" containerID="1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.808902 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269"} err="failed to get container status \"1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269\": rpc error: code = NotFound desc = could not find container \"1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269\": container with ID starting with 1e3b51c147db5be8431faa0661c6bbd4938b1e46b691a9473b0ddefd6ab8f269 not found: ID does not exist" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.808935 4931 scope.go:117] "RemoveContainer" containerID="84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486" Jan 30 05:21:42 crc kubenswrapper[4931]: E0130 05:21:42.812097 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486\": container with ID starting with 84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486 not found: ID does not exist" containerID="84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.812139 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486"} err="failed to get container status \"84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486\": rpc error: code = NotFound desc = could not 
find container \"84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486\": container with ID starting with 84c7e5f808fcf810d715e22f2d08905c6c16fce44c23bc72deb01ddb3d056486 not found: ID does not exist" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.848482 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15d065d1-b45c-450a-9eec-aa929632433c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.975918 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzlbl"] Jan 30 05:21:42 crc kubenswrapper[4931]: I0130 05:21:42.981293 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dzlbl"] Jan 30 05:21:43 crc kubenswrapper[4931]: I0130 05:21:43.435982 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d065d1-b45c-450a-9eec-aa929632433c" path="/var/lib/kubelet/pods/15d065d1-b45c-450a-9eec-aa929632433c/volumes" Jan 30 05:21:43 crc kubenswrapper[4931]: I0130 05:21:43.660145 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h" event={"ID":"39da06e0-e9ea-4570-b486-3c0d2fe79820","Type":"ContainerStarted","Data":"73b088c3d892eac744ad92115e819cd8cee8d8079242a5dca67e10016dc0ed8c"} Jan 30 05:21:43 crc kubenswrapper[4931]: I0130 05:21:43.660210 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h" event={"ID":"39da06e0-e9ea-4570-b486-3c0d2fe79820","Type":"ContainerStarted","Data":"a1b1c4f3db08a3fca137107295c23f533b18590d886a51a91a92a351bf5f8b8a"} Jan 30 05:21:43 crc kubenswrapper[4931]: I0130 05:21:43.662365 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm" event={"ID":"5049b2a6-f85e-4250-9b12-c70705adaf35","Type":"ContainerStarted","Data":"8c63d0bc957465018542479545f43be67531bf71e4edd2b213c708243a140698"} Jan 30 05:21:43 crc kubenswrapper[4931]: I0130 05:21:43.690095 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-qvz8h" podStartSLOduration=4.690059276 podStartE2EDuration="4.690059276s" podCreationTimestamp="2026-01-30 05:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:21:43.678548542 +0000 UTC m=+839.048458839" watchObservedRunningTime="2026-01-30 05:21:43.690059276 +0000 UTC m=+839.059969573" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.734077 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-8l2w4"] Jan 30 05:21:47 crc kubenswrapper[4931]: E0130 05:21:47.735718 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="extract-content" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.735820 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="extract-content" Jan 30 05:21:47 crc kubenswrapper[4931]: E0130 05:21:47.735909 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="extract-utilities" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.735988 4931 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="extract-utilities" Jan 30 05:21:47 crc kubenswrapper[4931]: E0130 05:21:47.736075 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="registry-server" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.736146 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="registry-server" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.736347 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d065d1-b45c-450a-9eec-aa929632433c" containerName="registry-server" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.736905 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.740549 4931 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2wljs" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.741292 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-8l2w4"] Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.920315 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34b0cb15-9c48-4bb3-89e7-85efd5b8b76c-bound-sa-token\") pod \"cert-manager-545d4d4674-8l2w4\" (UID: \"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c\") " pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:47 crc kubenswrapper[4931]: I0130 05:21:47.920804 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgct\" (UniqueName: \"kubernetes.io/projected/34b0cb15-9c48-4bb3-89e7-85efd5b8b76c-kube-api-access-dwgct\") pod \"cert-manager-545d4d4674-8l2w4\" (UID: \"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c\") " pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.022023 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwgct\" (UniqueName: \"kubernetes.io/projected/34b0cb15-9c48-4bb3-89e7-85efd5b8b76c-kube-api-access-dwgct\") pod \"cert-manager-545d4d4674-8l2w4\" (UID: \"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c\") " pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.022909 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34b0cb15-9c48-4bb3-89e7-85efd5b8b76c-bound-sa-token\") pod \"cert-manager-545d4d4674-8l2w4\" (UID: \"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c\") " pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.054554 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34b0cb15-9c48-4bb3-89e7-85efd5b8b76c-bound-sa-token\") pod \"cert-manager-545d4d4674-8l2w4\" (UID: \"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c\") " pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.054931 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwgct\" (UniqueName: \"kubernetes.io/projected/34b0cb15-9c48-4bb3-89e7-85efd5b8b76c-kube-api-access-dwgct\") pod \"cert-manager-545d4d4674-8l2w4\" (UID: 
\"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c\") " pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.064789 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-8l2w4" Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.388856 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-8l2w4"] Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.716393 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-8l2w4" event={"ID":"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c","Type":"ContainerStarted","Data":"051fba3fd5334586372d6209b352cc8ea34ed84f5b2b4941f55de8ec5d2f3544"} Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.716821 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-8l2w4" event={"ID":"34b0cb15-9c48-4bb3-89e7-85efd5b8b76c","Type":"ContainerStarted","Data":"596f227167a4e51e0a7d9a47a2c3cbbd895224dfdea08bf45fee58abe909417c"} Jan 30 05:21:48 crc kubenswrapper[4931]: I0130 05:21:48.745456 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-8l2w4" podStartSLOduration=1.745405895 podStartE2EDuration="1.745405895s" podCreationTimestamp="2026-01-30 05:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:21:48.739131078 +0000 UTC m=+844.109041355" watchObservedRunningTime="2026-01-30 05:21:48.745405895 +0000 UTC m=+844.115316172" Jan 30 05:21:51 crc kubenswrapper[4931]: I0130 05:21:51.702960 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hsrfm" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.217533 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kpdtt"] Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.219664 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.223675 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.234100 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.235070 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-29t8v" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.237676 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrtl\" (UniqueName: \"kubernetes.io/projected/3fd521ad-cd31-4827-99cf-2d78ddcf12ab-kube-api-access-mlrtl\") pod \"openstack-operator-index-kpdtt\" (UID: \"3fd521ad-cd31-4827-99cf-2d78ddcf12ab\") " pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.274595 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kpdtt"] Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.338613 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrtl\" (UniqueName: \"kubernetes.io/projected/3fd521ad-cd31-4827-99cf-2d78ddcf12ab-kube-api-access-mlrtl\") pod \"openstack-operator-index-kpdtt\" (UID: \"3fd521ad-cd31-4827-99cf-2d78ddcf12ab\") " pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.369801 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrtl\" (UniqueName: \"kubernetes.io/projected/3fd521ad-cd31-4827-99cf-2d78ddcf12ab-kube-api-access-mlrtl\") pod \"openstack-operator-index-kpdtt\" (UID: \"3fd521ad-cd31-4827-99cf-2d78ddcf12ab\") " pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.615178 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:21:55 crc kubenswrapper[4931]: I0130 05:21:55.899604 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kpdtt"] Jan 30 05:21:55 crc kubenswrapper[4931]: W0130 05:21:55.909321 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fd521ad_cd31_4827_99cf_2d78ddcf12ab.slice/crio-c8b5caf549c14da8f9183cfe6a9eab8cca937640c3b84c97dfebb701545714e1 WatchSource:0}: Error finding container c8b5caf549c14da8f9183cfe6a9eab8cca937640c3b84c97dfebb701545714e1: Status 404 returned error can't find the container with id c8b5caf549c14da8f9183cfe6a9eab8cca937640c3b84c97dfebb701545714e1 Jan 30 05:21:56 crc kubenswrapper[4931]: I0130 05:21:56.797128 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kpdtt" event={"ID":"3fd521ad-cd31-4827-99cf-2d78ddcf12ab","Type":"ContainerStarted","Data":"c8b5caf549c14da8f9183cfe6a9eab8cca937640c3b84c97dfebb701545714e1"} Jan 30 05:21:57 crc kubenswrapper[4931]: I0130 05:21:57.804709 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kpdtt" event={"ID":"3fd521ad-cd31-4827-99cf-2d78ddcf12ab","Type":"ContainerStarted","Data":"fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d"} Jan 30 05:21:57 crc kubenswrapper[4931]: I0130 05:21:57.823026 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kpdtt" podStartSLOduration=1.547232207 podStartE2EDuration="2.823012494s" podCreationTimestamp="2026-01-30 05:21:55 +0000 UTC" firstStartedPulling="2026-01-30 05:21:55.913308072 +0000 UTC m=+851.283218329" lastFinishedPulling="2026-01-30 05:21:57.189088349 +0000 UTC m=+852.558998616" observedRunningTime="2026-01-30 05:21:57.819659665 +0000 UTC m=+853.189569912" watchObservedRunningTime="2026-01-30 05:21:57.823012494 +0000 UTC m=+853.192922751" Jan 30 05:21:58 crc kubenswrapper[4931]: I0130 05:21:58.587402 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kpdtt"] Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.194791 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7znpc"] Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.195994 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.208258 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvhm8\" (UniqueName: \"kubernetes.io/projected/2efa198c-4fe6-4ed2-9627-14a9ce525363-kube-api-access-hvhm8\") pod \"openstack-operator-index-7znpc\" (UID: \"2efa198c-4fe6-4ed2-9627-14a9ce525363\") " pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.220545 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7znpc"] Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.310076 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvhm8\" (UniqueName: \"kubernetes.io/projected/2efa198c-4fe6-4ed2-9627-14a9ce525363-kube-api-access-hvhm8\") pod \"openstack-operator-index-7znpc\" (UID: \"2efa198c-4fe6-4ed2-9627-14a9ce525363\") " pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.344631 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvhm8\" (UniqueName: \"kubernetes.io/projected/2efa198c-4fe6-4ed2-9627-14a9ce525363-kube-api-access-hvhm8\") pod \"openstack-operator-index-7znpc\" (UID: \"2efa198c-4fe6-4ed2-9627-14a9ce525363\") " pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.531298 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:21:59 crc kubenswrapper[4931]: I0130 05:21:59.820829 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kpdtt" podUID="3fd521ad-cd31-4827-99cf-2d78ddcf12ab" containerName="registry-server" containerID="cri-o://fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d" gracePeriod=2 Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.027765 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7znpc"] Jan 30 05:22:00 crc kubenswrapper[4931]: W0130 05:22:00.038184 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2efa198c_4fe6_4ed2_9627_14a9ce525363.slice/crio-b9eac2e3334db1a58d7b71ee7385d902a669e28be2fafb0e798e7885f51cc328 WatchSource:0}: Error finding container b9eac2e3334db1a58d7b71ee7385d902a669e28be2fafb0e798e7885f51cc328: Status 404 returned error can't find the container with id b9eac2e3334db1a58d7b71ee7385d902a669e28be2fafb0e798e7885f51cc328 Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.225541 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.421940 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlrtl\" (UniqueName: \"kubernetes.io/projected/3fd521ad-cd31-4827-99cf-2d78ddcf12ab-kube-api-access-mlrtl\") pod \"3fd521ad-cd31-4827-99cf-2d78ddcf12ab\" (UID: \"3fd521ad-cd31-4827-99cf-2d78ddcf12ab\") " Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.439640 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd521ad-cd31-4827-99cf-2d78ddcf12ab-kube-api-access-mlrtl" (OuterVolumeSpecName: "kube-api-access-mlrtl") pod "3fd521ad-cd31-4827-99cf-2d78ddcf12ab" (UID: "3fd521ad-cd31-4827-99cf-2d78ddcf12ab"). InnerVolumeSpecName "kube-api-access-mlrtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.524813 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlrtl\" (UniqueName: \"kubernetes.io/projected/3fd521ad-cd31-4827-99cf-2d78ddcf12ab-kube-api-access-mlrtl\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.829654 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7znpc" event={"ID":"2efa198c-4fe6-4ed2-9627-14a9ce525363","Type":"ContainerStarted","Data":"88926be0d9c4f030e2810b4f687d3e0e633f3337293beb7ca2e2922237f4364d"} Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.829733 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7znpc" event={"ID":"2efa198c-4fe6-4ed2-9627-14a9ce525363","Type":"ContainerStarted","Data":"b9eac2e3334db1a58d7b71ee7385d902a669e28be2fafb0e798e7885f51cc328"} Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.831567 4931 generic.go:334] "Generic (PLEG): container finished" podID="3fd521ad-cd31-4827-99cf-2d78ddcf12ab" containerID="fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d" exitCode=0 Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.831644 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kpdtt" event={"ID":"3fd521ad-cd31-4827-99cf-2d78ddcf12ab","Type":"ContainerDied","Data":"fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d"} Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.831694 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kpdtt" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.831712 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kpdtt" event={"ID":"3fd521ad-cd31-4827-99cf-2d78ddcf12ab","Type":"ContainerDied","Data":"c8b5caf549c14da8f9183cfe6a9eab8cca937640c3b84c97dfebb701545714e1"} Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.831799 4931 scope.go:117] "RemoveContainer" containerID="fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.863549 4931 scope.go:117] "RemoveContainer" containerID="fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d" Jan 30 05:22:00 crc kubenswrapper[4931]: E0130 05:22:00.864301 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d\": container with ID starting with fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d not found: ID does not exist" containerID="fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.864365 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d"} err="failed to get container status \"fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d\": rpc error: code = NotFound desc = could not find container \"fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d\": container with ID starting with fba7625e8263d7e47fd7c432e2e98bf81d8fbf8f542e2b36f7fc6b008a3d242d not found: ID does not exist" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.864863 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7znpc" podStartSLOduration=1.4370506189999999 podStartE2EDuration="1.864841083s" podCreationTimestamp="2026-01-30 05:21:59 +0000 UTC" firstStartedPulling="2026-01-30 05:22:00.042654987 +0000 UTC m=+855.412565264" lastFinishedPulling="2026-01-30 05:22:00.470445471 +0000 UTC m=+855.840355728" observedRunningTime="2026-01-30 05:22:00.852620707 +0000 UTC m=+856.222531004" watchObservedRunningTime="2026-01-30 05:22:00.864841083 +0000 UTC m=+856.234751380" Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.884358 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kpdtt"] Jan 30 05:22:00 crc kubenswrapper[4931]: I0130 05:22:00.890884 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kpdtt"] Jan 30 05:22:01 crc kubenswrapper[4931]: I0130 05:22:01.436286 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd521ad-cd31-4827-99cf-2d78ddcf12ab" path="/var/lib/kubelet/pods/3fd521ad-cd31-4827-99cf-2d78ddcf12ab/volumes" Jan 30 05:22:09 crc kubenswrapper[4931]: I0130 05:22:09.532477 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:22:09 crc kubenswrapper[4931]: I0130 05:22:09.533273 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:22:09 crc kubenswrapper[4931]: I0130 05:22:09.587234 4931 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:22:09 crc kubenswrapper[4931]: I0130 05:22:09.953675 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7znpc" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.309919 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff"] Jan 30 05:22:16 crc kubenswrapper[4931]: E0130 05:22:16.310599 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd521ad-cd31-4827-99cf-2d78ddcf12ab" containerName="registry-server" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.310621 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd521ad-cd31-4827-99cf-2d78ddcf12ab" containerName="registry-server" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.310817 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd521ad-cd31-4827-99cf-2d78ddcf12ab" containerName="registry-server" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.312199 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.315492 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mwjcp" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.328794 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff"] Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.507586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlg4v\" (UniqueName: \"kubernetes.io/projected/7fc41231-569f-429f-bcc3-d7d63888874b-kube-api-access-zlg4v\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.507657 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.507759 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.609523 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlg4v\" (UniqueName: \"kubernetes.io/projected/7fc41231-569f-429f-bcc3-d7d63888874b-kube-api-access-zlg4v\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " 
pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.609598 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.609723 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.610548 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.611155 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.644293 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlg4v\" (UniqueName: \"kubernetes.io/projected/7fc41231-569f-429f-bcc3-d7d63888874b-kube-api-access-zlg4v\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:16 crc kubenswrapper[4931]: I0130 05:22:16.943685 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:17 crc kubenswrapper[4931]: I0130 05:22:17.485027 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff"] Jan 30 05:22:17 crc kubenswrapper[4931]: I0130 05:22:17.979316 4931 generic.go:334] "Generic (PLEG): container finished" podID="7fc41231-569f-429f-bcc3-d7d63888874b" containerID="1631dbc97906fa3af4c41cf4d6966ae7a42d7a3a73774352a5be878e540d98be" exitCode=0 Jan 30 05:22:17 crc kubenswrapper[4931]: I0130 05:22:17.979501 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" event={"ID":"7fc41231-569f-429f-bcc3-d7d63888874b","Type":"ContainerDied","Data":"1631dbc97906fa3af4c41cf4d6966ae7a42d7a3a73774352a5be878e540d98be"} Jan 30 05:22:17 crc kubenswrapper[4931]: I0130 05:22:17.979837 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" event={"ID":"7fc41231-569f-429f-bcc3-d7d63888874b","Type":"ContainerStarted","Data":"99df7d919fc8654b14cd44ee0356e56e18b7f6c0d432d68347ce64f41af5146f"} Jan 30 05:22:18 crc kubenswrapper[4931]: I0130 05:22:18.991281 4931 generic.go:334] "Generic (PLEG): container finished" podID="7fc41231-569f-429f-bcc3-d7d63888874b" containerID="70f5f3382fe3f09c39091adb22d065ed10d156811c6a803959c3ca9e2e50e29b" exitCode=0 Jan 30 05:22:18 crc kubenswrapper[4931]: I0130 05:22:18.991394 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" event={"ID":"7fc41231-569f-429f-bcc3-d7d63888874b","Type":"ContainerDied","Data":"70f5f3382fe3f09c39091adb22d065ed10d156811c6a803959c3ca9e2e50e29b"} Jan 30 05:22:20 crc kubenswrapper[4931]: I0130 05:22:20.022224 4931 generic.go:334] "Generic (PLEG): container finished" podID="7fc41231-569f-429f-bcc3-d7d63888874b" containerID="03aa25a03c595886ed804fdd0f68916aaddf4c83ea63500208775fc173916782" exitCode=0 Jan 30 05:22:20 crc kubenswrapper[4931]: I0130 05:22:20.022317 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" event={"ID":"7fc41231-569f-429f-bcc3-d7d63888874b","Type":"ContainerDied","Data":"03aa25a03c595886ed804fdd0f68916aaddf4c83ea63500208775fc173916782"} Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.392367 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.588608 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-util\") pod \"7fc41231-569f-429f-bcc3-d7d63888874b\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.588695 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-bundle\") pod \"7fc41231-569f-429f-bcc3-d7d63888874b\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.588793 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlg4v\" (UniqueName: \"kubernetes.io/projected/7fc41231-569f-429f-bcc3-d7d63888874b-kube-api-access-zlg4v\") pod \"7fc41231-569f-429f-bcc3-d7d63888874b\" (UID: \"7fc41231-569f-429f-bcc3-d7d63888874b\") " Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.589945 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-bundle" (OuterVolumeSpecName: "bundle") pod "7fc41231-569f-429f-bcc3-d7d63888874b" (UID: "7fc41231-569f-429f-bcc3-d7d63888874b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.599774 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc41231-569f-429f-bcc3-d7d63888874b-kube-api-access-zlg4v" (OuterVolumeSpecName: "kube-api-access-zlg4v") pod "7fc41231-569f-429f-bcc3-d7d63888874b" (UID: "7fc41231-569f-429f-bcc3-d7d63888874b"). InnerVolumeSpecName "kube-api-access-zlg4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.619276 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-util" (OuterVolumeSpecName: "util") pod "7fc41231-569f-429f-bcc3-d7d63888874b" (UID: "7fc41231-569f-429f-bcc3-d7d63888874b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.649171 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ztfm"] Jan 30 05:22:21 crc kubenswrapper[4931]: E0130 05:22:21.649459 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="extract" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.649474 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="extract" Jan 30 05:22:21 crc kubenswrapper[4931]: E0130 05:22:21.649490 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="pull" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.649498 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="pull" Jan 30 05:22:21 crc kubenswrapper[4931]: E0130 05:22:21.649516 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="util" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.649524 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="util" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.649678 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc41231-569f-429f-bcc3-d7d63888874b" containerName="extract" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.650662 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.666859 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ztfm"] Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.690041 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlg4v\" (UniqueName: \"kubernetes.io/projected/7fc41231-569f-429f-bcc3-d7d63888874b-kube-api-access-zlg4v\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.690238 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-util\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.690359 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7fc41231-569f-429f-bcc3-d7d63888874b-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.792166 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6b2j\" (UniqueName: \"kubernetes.io/projected/49cdbfd0-6da5-4669-b468-c4622ed9d57e-kube-api-access-p6b2j\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.792519 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-catalog-content\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 
05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.792660 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-utilities\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.893222 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-catalog-content\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.893270 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-utilities\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.893322 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6b2j\" (UniqueName: \"kubernetes.io/projected/49cdbfd0-6da5-4669-b468-c4622ed9d57e-kube-api-access-p6b2j\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.894020 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-catalog-content\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.894551 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-utilities\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.916772 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6b2j\" (UniqueName: \"kubernetes.io/projected/49cdbfd0-6da5-4669-b468-c4622ed9d57e-kube-api-access-p6b2j\") pod \"community-operators-7ztfm\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:21 crc kubenswrapper[4931]: I0130 05:22:21.978257 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:22 crc kubenswrapper[4931]: I0130 05:22:22.037908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" event={"ID":"7fc41231-569f-429f-bcc3-d7d63888874b","Type":"ContainerDied","Data":"99df7d919fc8654b14cd44ee0356e56e18b7f6c0d432d68347ce64f41af5146f"} Jan 30 05:22:22 crc kubenswrapper[4931]: I0130 05:22:22.037954 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99df7d919fc8654b14cd44ee0356e56e18b7f6c0d432d68347ce64f41af5146f" Jan 30 05:22:22 crc kubenswrapper[4931]: I0130 05:22:22.037966 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff" Jan 30 05:22:22 crc kubenswrapper[4931]: I0130 05:22:22.447394 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ztfm"] Jan 30 05:22:23 crc kubenswrapper[4931]: I0130 05:22:23.048395 4931 generic.go:334] "Generic (PLEG): container finished" podID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerID="314d724b02d6787a215dc9b3f666ab62c4c4c772bf2b499443df86e2faaa1c65" exitCode=0 Jan 30 05:22:23 crc kubenswrapper[4931]: I0130 05:22:23.048515 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerDied","Data":"314d724b02d6787a215dc9b3f666ab62c4c4c772bf2b499443df86e2faaa1c65"} Jan 30 05:22:23 crc kubenswrapper[4931]: I0130 05:22:23.048565 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerStarted","Data":"3f94c7bde2b7138835d9baec6c7d669c847406880915fd1b9d23e1f123838119"} Jan 30 05:22:24 crc kubenswrapper[4931]: I0130 05:22:24.073526 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerStarted","Data":"b7e2d85f8046162ebaf6700ba0b11f2c28515570d4dc72972405a6137dfd4e9d"} Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.043119 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qrzn"] Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.045148 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.061799 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qrzn"] Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.091443 4931 generic.go:334] "Generic (PLEG): container finished" podID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerID="b7e2d85f8046162ebaf6700ba0b11f2c28515570d4dc72972405a6137dfd4e9d" exitCode=0 Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.091492 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerDied","Data":"b7e2d85f8046162ebaf6700ba0b11f2c28515570d4dc72972405a6137dfd4e9d"} Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.145380 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-utilities\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.145452 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2qfh\" (UniqueName: \"kubernetes.io/projected/2a095d89-69cc-45d2-89b3-f363ba80192b-kube-api-access-c2qfh\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.145495 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-catalog-content\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.246772 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-utilities\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.246826 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2qfh\" (UniqueName: \"kubernetes.io/projected/2a095d89-69cc-45d2-89b3-f363ba80192b-kube-api-access-c2qfh\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.246865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-catalog-content\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.247551 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-catalog-content\") pod \"certified-operators-6qrzn\" 
(UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.247584 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-utilities\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.284652 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2qfh\" (UniqueName: \"kubernetes.io/projected/2a095d89-69cc-45d2-89b3-f363ba80192b-kube-api-access-c2qfh\") pod \"certified-operators-6qrzn\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.371011 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:25 crc kubenswrapper[4931]: I0130 05:22:25.685364 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qrzn"] Jan 30 05:22:26 crc kubenswrapper[4931]: I0130 05:22:26.101313 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerStarted","Data":"662675c88d6a303836d0fba35c2d80ebd8b2008c91220499fa1991520747f37e"} Jan 30 05:22:26 crc kubenswrapper[4931]: I0130 05:22:26.104535 4931 generic.go:334] "Generic (PLEG): container finished" podID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerID="6f5368956d7211b22c5fc65e3edeafd0d348af26b0c6292fae2e9e7510e83b64" exitCode=0 Jan 30 05:22:26 crc kubenswrapper[4931]: I0130 05:22:26.104578 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrzn" event={"ID":"2a095d89-69cc-45d2-89b3-f363ba80192b","Type":"ContainerDied","Data":"6f5368956d7211b22c5fc65e3edeafd0d348af26b0c6292fae2e9e7510e83b64"} Jan 30 05:22:26 crc kubenswrapper[4931]: I0130 05:22:26.104611 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrzn" event={"ID":"2a095d89-69cc-45d2-89b3-f363ba80192b","Type":"ContainerStarted","Data":"e14a4370cfa1b43d0fed3626b0eff804af5fd5b2c13e420d03f69008172714ba"} Jan 30 05:22:26 crc kubenswrapper[4931]: I0130 05:22:26.122186 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ztfm" podStartSLOduration=2.667332144 podStartE2EDuration="5.122170308s" podCreationTimestamp="2026-01-30 05:22:21 +0000 UTC" firstStartedPulling="2026-01-30 05:22:23.051452882 +0000 UTC m=+878.421363179" lastFinishedPulling="2026-01-30 05:22:25.506291086 +0000 UTC m=+880.876201343" observedRunningTime="2026-01-30 05:22:26.120197822 +0000 UTC m=+881.490108109" watchObservedRunningTime="2026-01-30 05:22:26.122170308 +0000 UTC m=+881.492080565" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.121626 4931 generic.go:334] "Generic (PLEG): container finished" podID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerID="cf8658320b6c2873656da860206e90f8011ee8e08a294c3eccdfc50db5066b48" exitCode=0 Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.123103 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrzn" 
event={"ID":"2a095d89-69cc-45d2-89b3-f363ba80192b","Type":"ContainerDied","Data":"cf8658320b6c2873656da860206e90f8011ee8e08a294c3eccdfc50db5066b48"} Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.301524 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9"] Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.303334 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.307313 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7pfmh" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.325580 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9"] Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.362874 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.362935 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.475951 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dthxp\" (UniqueName: \"kubernetes.io/projected/27c443b8-82d2-41c1-b747-b89e6cb44f16-kube-api-access-dthxp\") pod \"openstack-operator-controller-init-757f46c65d-rscb9\" (UID: \"27c443b8-82d2-41c1-b747-b89e6cb44f16\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.577381 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dthxp\" (UniqueName: \"kubernetes.io/projected/27c443b8-82d2-41c1-b747-b89e6cb44f16-kube-api-access-dthxp\") pod \"openstack-operator-controller-init-757f46c65d-rscb9\" (UID: \"27c443b8-82d2-41c1-b747-b89e6cb44f16\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.599829 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dthxp\" (UniqueName: \"kubernetes.io/projected/27c443b8-82d2-41c1-b747-b89e6cb44f16-kube-api-access-dthxp\") pod \"openstack-operator-controller-init-757f46c65d-rscb9\" (UID: \"27c443b8-82d2-41c1-b747-b89e6cb44f16\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.629933 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:27 crc kubenswrapper[4931]: I0130 05:22:27.906983 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9"] Jan 30 05:22:27 crc kubenswrapper[4931]: W0130 05:22:27.915788 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c443b8_82d2_41c1_b747_b89e6cb44f16.slice/crio-57c02a6fe2a6af8364a73d91ebf8b1df7f7ce70faeba0627d6f563cf4c4951bc WatchSource:0}: Error finding container 57c02a6fe2a6af8364a73d91ebf8b1df7f7ce70faeba0627d6f563cf4c4951bc: Status 404 returned error can't find the container with id 57c02a6fe2a6af8364a73d91ebf8b1df7f7ce70faeba0627d6f563cf4c4951bc Jan 30 05:22:28 crc kubenswrapper[4931]: I0130 05:22:28.131375 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" event={"ID":"27c443b8-82d2-41c1-b747-b89e6cb44f16","Type":"ContainerStarted","Data":"57c02a6fe2a6af8364a73d91ebf8b1df7f7ce70faeba0627d6f563cf4c4951bc"} Jan 30 05:22:28 crc kubenswrapper[4931]: I0130 05:22:28.134862 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrzn" event={"ID":"2a095d89-69cc-45d2-89b3-f363ba80192b","Type":"ContainerStarted","Data":"cfc2d03f7956dc0e0818cdc83356a4e8e3fce5194a922940559190e5c250570f"} Jan 30 05:22:28 crc kubenswrapper[4931]: I0130 05:22:28.158130 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qrzn" podStartSLOduration=1.711368295 podStartE2EDuration="3.158112226s" podCreationTimestamp="2026-01-30 05:22:25 +0000 UTC" firstStartedPulling="2026-01-30 05:22:26.106338397 +0000 UTC m=+881.476248684" lastFinishedPulling="2026-01-30 05:22:27.553082358 +0000 UTC m=+882.922992615" observedRunningTime="2026-01-30 05:22:28.155384182 +0000 UTC m=+883.525294459" watchObservedRunningTime="2026-01-30 05:22:28.158112226 +0000 UTC m=+883.528022483" Jan 30 05:22:31 crc kubenswrapper[4931]: I0130 05:22:31.979459 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:31 crc kubenswrapper[4931]: I0130 05:22:31.979833 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:32 crc kubenswrapper[4931]: I0130 05:22:32.040983 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:32 crc kubenswrapper[4931]: I0130 05:22:32.245049 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:33 crc kubenswrapper[4931]: I0130 05:22:33.179288 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" event={"ID":"27c443b8-82d2-41c1-b747-b89e6cb44f16","Type":"ContainerStarted","Data":"0a18f96cc0c8c1e6c5347eb4ea7dc68d3091fed656839711a9073b590c3d7621"} Jan 30 05:22:33 crc kubenswrapper[4931]: I0130 05:22:33.179403 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:33 crc kubenswrapper[4931]: I0130 05:22:33.224702 4931 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" podStartSLOduration=1.6786490120000002 podStartE2EDuration="6.224677296s" podCreationTimestamp="2026-01-30 05:22:27 +0000 UTC" firstStartedPulling="2026-01-30 05:22:27.918370368 +0000 UTC m=+883.288280625" lastFinishedPulling="2026-01-30 05:22:32.464398652 +0000 UTC m=+887.834308909" observedRunningTime="2026-01-30 05:22:33.220344836 +0000 UTC m=+888.590255133" watchObservedRunningTime="2026-01-30 05:22:33.224677296 +0000 UTC m=+888.594587593" Jan 30 05:22:34 crc kubenswrapper[4931]: I0130 05:22:34.430260 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ztfm"] Jan 30 05:22:34 crc kubenswrapper[4931]: I0130 05:22:34.430818 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ztfm" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="registry-server" containerID="cri-o://662675c88d6a303836d0fba35c2d80ebd8b2008c91220499fa1991520747f37e" gracePeriod=2 Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.200013 4931 generic.go:334] "Generic (PLEG): container finished" podID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerID="662675c88d6a303836d0fba35c2d80ebd8b2008c91220499fa1991520747f37e" exitCode=0 Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.200054 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerDied","Data":"662675c88d6a303836d0fba35c2d80ebd8b2008c91220499fa1991520747f37e"} Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.371250 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.371558 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.375590 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.456475 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.487952 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-utilities\") pod \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.488034 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6b2j\" (UniqueName: \"kubernetes.io/projected/49cdbfd0-6da5-4669-b468-c4622ed9d57e-kube-api-access-p6b2j\") pod \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.488098 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-catalog-content\") pod \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\" (UID: \"49cdbfd0-6da5-4669-b468-c4622ed9d57e\") " Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.490789 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-utilities" (OuterVolumeSpecName: "utilities") pod "49cdbfd0-6da5-4669-b468-c4622ed9d57e" (UID: "49cdbfd0-6da5-4669-b468-c4622ed9d57e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.507472 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49cdbfd0-6da5-4669-b468-c4622ed9d57e-kube-api-access-p6b2j" (OuterVolumeSpecName: "kube-api-access-p6b2j") pod "49cdbfd0-6da5-4669-b468-c4622ed9d57e" (UID: "49cdbfd0-6da5-4669-b468-c4622ed9d57e"). InnerVolumeSpecName "kube-api-access-p6b2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.575507 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49cdbfd0-6da5-4669-b468-c4622ed9d57e" (UID: "49cdbfd0-6da5-4669-b468-c4622ed9d57e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.590062 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.590114 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6b2j\" (UniqueName: \"kubernetes.io/projected/49cdbfd0-6da5-4669-b468-c4622ed9d57e-kube-api-access-p6b2j\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:35 crc kubenswrapper[4931]: I0130 05:22:35.590143 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49cdbfd0-6da5-4669-b468-c4622ed9d57e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.213872 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ztfm" event={"ID":"49cdbfd0-6da5-4669-b468-c4622ed9d57e","Type":"ContainerDied","Data":"3f94c7bde2b7138835d9baec6c7d669c847406880915fd1b9d23e1f123838119"} Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.213955 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ztfm" Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.213970 4931 scope.go:117] "RemoveContainer" containerID="662675c88d6a303836d0fba35c2d80ebd8b2008c91220499fa1991520747f37e" Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.240237 4931 scope.go:117] "RemoveContainer" containerID="b7e2d85f8046162ebaf6700ba0b11f2c28515570d4dc72972405a6137dfd4e9d" Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.283806 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ztfm"] Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.284000 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.286264 4931 scope.go:117] "RemoveContainer" containerID="314d724b02d6787a215dc9b3f666ab62c4c4c772bf2b499443df86e2faaa1c65" Jan 30 05:22:36 crc kubenswrapper[4931]: I0130 05:22:36.287729 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ztfm"] Jan 30 05:22:37 crc kubenswrapper[4931]: I0130 05:22:37.435793 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" path="/var/lib/kubelet/pods/49cdbfd0-6da5-4669-b468-c4622ed9d57e/volumes" Jan 30 05:22:37 crc kubenswrapper[4931]: I0130 05:22:37.634176 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-rscb9" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.037139 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qrzn"] Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.037553 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qrzn" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="registry-server" containerID="cri-o://cfc2d03f7956dc0e0818cdc83356a4e8e3fce5194a922940559190e5c250570f" gracePeriod=2 Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.245517 4931 generic.go:334] 
"Generic (PLEG): container finished" podID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerID="cfc2d03f7956dc0e0818cdc83356a4e8e3fce5194a922940559190e5c250570f" exitCode=0 Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.245742 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrzn" event={"ID":"2a095d89-69cc-45d2-89b3-f363ba80192b","Type":"ContainerDied","Data":"cfc2d03f7956dc0e0818cdc83356a4e8e3fce5194a922940559190e5c250570f"} Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.504759 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.654854 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-utilities\") pod \"2a095d89-69cc-45d2-89b3-f363ba80192b\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.654947 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-catalog-content\") pod \"2a095d89-69cc-45d2-89b3-f363ba80192b\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.655036 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2qfh\" (UniqueName: \"kubernetes.io/projected/2a095d89-69cc-45d2-89b3-f363ba80192b-kube-api-access-c2qfh\") pod \"2a095d89-69cc-45d2-89b3-f363ba80192b\" (UID: \"2a095d89-69cc-45d2-89b3-f363ba80192b\") " Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.656725 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-utilities" (OuterVolumeSpecName: "utilities") pod "2a095d89-69cc-45d2-89b3-f363ba80192b" (UID: "2a095d89-69cc-45d2-89b3-f363ba80192b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.660464 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a095d89-69cc-45d2-89b3-f363ba80192b-kube-api-access-c2qfh" (OuterVolumeSpecName: "kube-api-access-c2qfh") pod "2a095d89-69cc-45d2-89b3-f363ba80192b" (UID: "2a095d89-69cc-45d2-89b3-f363ba80192b"). InnerVolumeSpecName "kube-api-access-c2qfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.739676 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a095d89-69cc-45d2-89b3-f363ba80192b" (UID: "2a095d89-69cc-45d2-89b3-f363ba80192b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.757071 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2qfh\" (UniqueName: \"kubernetes.io/projected/2a095d89-69cc-45d2-89b3-f363ba80192b-kube-api-access-c2qfh\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.757126 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:39 crc kubenswrapper[4931]: I0130 05:22:39.757147 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a095d89-69cc-45d2-89b3-f363ba80192b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.256671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrzn" event={"ID":"2a095d89-69cc-45d2-89b3-f363ba80192b","Type":"ContainerDied","Data":"e14a4370cfa1b43d0fed3626b0eff804af5fd5b2c13e420d03f69008172714ba"} Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.256743 4931 scope.go:117] "RemoveContainer" containerID="cfc2d03f7956dc0e0818cdc83356a4e8e3fce5194a922940559190e5c250570f" Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.256921 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qrzn" Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.284745 4931 scope.go:117] "RemoveContainer" containerID="cf8658320b6c2873656da860206e90f8011ee8e08a294c3eccdfc50db5066b48" Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.337340 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qrzn"] Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.346212 4931 scope.go:117] "RemoveContainer" containerID="6f5368956d7211b22c5fc65e3edeafd0d348af26b0c6292fae2e9e7510e83b64" Jan 30 05:22:40 crc kubenswrapper[4931]: I0130 05:22:40.349381 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qrzn"] Jan 30 05:22:41 crc kubenswrapper[4931]: I0130 05:22:41.611602 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" path="/var/lib/kubelet/pods/2a095d89-69cc-45d2-89b3-f363ba80192b/volumes" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.292241 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qhkx7"] Jan 30 05:22:44 crc kubenswrapper[4931]: E0130 05:22:44.292610 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="extract-utilities" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.292631 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="extract-utilities" Jan 30 05:22:44 crc kubenswrapper[4931]: E0130 05:22:44.292667 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="registry-server" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293073 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="registry-server" Jan 30 05:22:44 crc kubenswrapper[4931]: E0130 05:22:44.293116 4931 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="registry-server" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293129 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="registry-server" Jan 30 05:22:44 crc kubenswrapper[4931]: E0130 05:22:44.293154 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="extract-content" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293168 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="extract-content" Jan 30 05:22:44 crc kubenswrapper[4931]: E0130 05:22:44.293184 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="extract-utilities" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293196 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="extract-utilities" Jan 30 05:22:44 crc kubenswrapper[4931]: E0130 05:22:44.293216 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="extract-content" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293228 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="extract-content" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293461 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a095d89-69cc-45d2-89b3-f363ba80192b" containerName="registry-server" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.293488 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="49cdbfd0-6da5-4669-b468-c4622ed9d57e" containerName="registry-server" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.294847 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.311646 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhkx7"] Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.444927 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-utilities\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.445023 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb44p\" (UniqueName: \"kubernetes.io/projected/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-kube-api-access-hb44p\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.445191 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-catalog-content\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.547035 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb44p\" (UniqueName: \"kubernetes.io/projected/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-kube-api-access-hb44p\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.547094 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-catalog-content\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.547250 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-utilities\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.547677 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-utilities\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.548468 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-catalog-content\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.570475 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hb44p\" (UniqueName: \"kubernetes.io/projected/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-kube-api-access-hb44p\") pod \"redhat-marketplace-qhkx7\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.619118 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:44 crc kubenswrapper[4931]: I0130 05:22:44.869502 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhkx7"] Jan 30 05:22:45 crc kubenswrapper[4931]: I0130 05:22:45.298889 4931 generic.go:334] "Generic (PLEG): container finished" podID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerID="3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e" exitCode=0 Jan 30 05:22:45 crc kubenswrapper[4931]: I0130 05:22:45.298934 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhkx7" event={"ID":"53b1acfb-8c75-448d-9ef1-94eb07b92e6b","Type":"ContainerDied","Data":"3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e"} Jan 30 05:22:45 crc kubenswrapper[4931]: I0130 05:22:45.298988 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhkx7" event={"ID":"53b1acfb-8c75-448d-9ef1-94eb07b92e6b","Type":"ContainerStarted","Data":"0fc85161717a539fa8da0c468567aa7f1e508cd927e38c401b64d7b954772d30"} Jan 30 05:22:46 crc kubenswrapper[4931]: I0130 05:22:46.309069 4931 generic.go:334] "Generic (PLEG): container finished" podID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerID="e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2" exitCode=0 Jan 30 05:22:46 crc kubenswrapper[4931]: I0130 05:22:46.309279 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhkx7" event={"ID":"53b1acfb-8c75-448d-9ef1-94eb07b92e6b","Type":"ContainerDied","Data":"e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2"} Jan 30 05:22:47 crc kubenswrapper[4931]: I0130 05:22:47.323514 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhkx7" event={"ID":"53b1acfb-8c75-448d-9ef1-94eb07b92e6b","Type":"ContainerStarted","Data":"93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9"} Jan 30 05:22:47 crc kubenswrapper[4931]: I0130 05:22:47.352971 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qhkx7" podStartSLOduration=1.890929268 podStartE2EDuration="3.352946081s" podCreationTimestamp="2026-01-30 05:22:44 +0000 UTC" firstStartedPulling="2026-01-30 05:22:45.300466132 +0000 UTC m=+900.670376429" lastFinishedPulling="2026-01-30 05:22:46.762482975 +0000 UTC m=+902.132393242" observedRunningTime="2026-01-30 05:22:47.348842397 +0000 UTC m=+902.718752694" watchObservedRunningTime="2026-01-30 05:22:47.352946081 +0000 UTC m=+902.722856368" Jan 30 05:22:54 crc kubenswrapper[4931]: I0130 05:22:54.620723 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:54 crc kubenswrapper[4931]: I0130 05:22:54.621271 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:54 crc kubenswrapper[4931]: I0130 05:22:54.691701 4931 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:55 crc kubenswrapper[4931]: I0130 05:22:55.450631 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:56 crc kubenswrapper[4931]: I0130 05:22:56.643930 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhkx7"] Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.363456 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.363535 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.397606 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qhkx7" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="registry-server" containerID="cri-o://93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9" gracePeriod=2 Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.783575 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.874563 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb44p\" (UniqueName: \"kubernetes.io/projected/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-kube-api-access-hb44p\") pod \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.874650 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-utilities\") pod \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.874677 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-catalog-content\") pod \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\" (UID: \"53b1acfb-8c75-448d-9ef1-94eb07b92e6b\") " Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.875389 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-utilities" (OuterVolumeSpecName: "utilities") pod "53b1acfb-8c75-448d-9ef1-94eb07b92e6b" (UID: "53b1acfb-8c75-448d-9ef1-94eb07b92e6b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.894704 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53b1acfb-8c75-448d-9ef1-94eb07b92e6b" (UID: "53b1acfb-8c75-448d-9ef1-94eb07b92e6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.895199 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-kube-api-access-hb44p" (OuterVolumeSpecName: "kube-api-access-hb44p") pod "53b1acfb-8c75-448d-9ef1-94eb07b92e6b" (UID: "53b1acfb-8c75-448d-9ef1-94eb07b92e6b"). InnerVolumeSpecName "kube-api-access-hb44p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.975337 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb44p\" (UniqueName: \"kubernetes.io/projected/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-kube-api-access-hb44p\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.975368 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:57 crc kubenswrapper[4931]: I0130 05:22:57.975379 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53b1acfb-8c75-448d-9ef1-94eb07b92e6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.408835 4931 generic.go:334] "Generic (PLEG): container finished" podID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerID="93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9" exitCode=0 Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.408936 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qhkx7" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.408958 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhkx7" event={"ID":"53b1acfb-8c75-448d-9ef1-94eb07b92e6b","Type":"ContainerDied","Data":"93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9"} Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.409070 4931 scope.go:117] "RemoveContainer" containerID="93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.409234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qhkx7" event={"ID":"53b1acfb-8c75-448d-9ef1-94eb07b92e6b","Type":"ContainerDied","Data":"0fc85161717a539fa8da0c468567aa7f1e508cd927e38c401b64d7b954772d30"} Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.441821 4931 scope.go:117] "RemoveContainer" containerID="e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.455028 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhkx7"] Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.461156 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qhkx7"] Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.468239 4931 scope.go:117] "RemoveContainer" containerID="3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.513683 4931 scope.go:117] "RemoveContainer" containerID="93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9" Jan 30 05:22:58 crc kubenswrapper[4931]: E0130 05:22:58.514104 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9\": container with ID starting with 93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9 not found: ID does not exist" containerID="93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.514146 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9"} err="failed to get container status \"93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9\": rpc error: code = NotFound desc = could not find container \"93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9\": container with ID starting with 93cb27deefd25bc0a0cfa7742cbed5fe5edbbc049bfbd9f8b6f8883f1db330d9 not found: ID does not exist" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.514170 4931 scope.go:117] "RemoveContainer" containerID="e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2" Jan 30 05:22:58 crc kubenswrapper[4931]: E0130 05:22:58.514591 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2\": container with ID starting with e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2 not found: ID does not exist" containerID="e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.514616 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2"} err="failed to get container status \"e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2\": rpc error: code = NotFound desc = could not find container \"e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2\": container with ID starting with e9b29114ef6554ae05e9356410b8f0c63040eddcdda9e52cdd177b28297f60b2 not found: ID does not exist" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.514632 4931 scope.go:117] "RemoveContainer" containerID="3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e" Jan 30 05:22:58 crc kubenswrapper[4931]: E0130 05:22:58.515249 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e\": container with ID starting with 3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e not found: ID does not exist" containerID="3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e" Jan 30 05:22:58 crc kubenswrapper[4931]: I0130 05:22:58.515278 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e"} err="failed to get container status \"3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e\": rpc error: code = NotFound desc = could not find container \"3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e\": container with ID starting with 3f4c917911a567a55fdcec78e67814ce33c0c7e69484fc283691b551ba36eb0e not found: ID does not exist" Jan 30 05:22:59 crc kubenswrapper[4931]: I0130 05:22:59.434158 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" path="/var/lib/kubelet/pods/53b1acfb-8c75-448d-9ef1-94eb07b92e6b/volumes" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.031204 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk"] Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.032059 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="extract-content" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.032102 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="extract-content" Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.032118 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="extract-utilities" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.032128 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="extract-utilities" Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.032143 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="registry-server" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.032151 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="registry-server" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.032283 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53b1acfb-8c75-448d-9ef1-94eb07b92e6b" containerName="registry-server" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.032865 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.034526 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-pjqv5" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.035022 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.035754 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.037071 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z754h" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.043693 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.052914 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.058615 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.059834 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.063801 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pwqfx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.076461 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.077242 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.080256 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.082309 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4hght" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.096524 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.099129 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.101375 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.103029 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vrmds" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.108061 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.129572 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.130543 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.132698 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.133231 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bd57t" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.164602 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.165301 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.172944 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.173925 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.177030 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.177340 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5fpjl" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.177581 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lrgxk" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.187153 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.191631 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.192477 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqxq\" (UniqueName: \"kubernetes.io/projected/dea1ae69-0c15-4228-a323-dc6f762e3c82-kube-api-access-fjqxq\") pod \"designate-operator-controller-manager-6d9697b7f4-bf56z\" (UID: \"dea1ae69-0c15-4228-a323-dc6f762e3c82\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.192517 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjd2d\" (UniqueName: \"kubernetes.io/projected/eb76dd84-30db-4769-852c-9a42814949d7-kube-api-access-xjd2d\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-mttxk\" (UID: \"eb76dd84-30db-4769-852c-9a42814949d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.192549 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6s9\" (UniqueName: \"kubernetes.io/projected/80b25db7-e1c2-4787-89f4-952cd7e845ba-kube-api-access-ff6s9\") pod \"cinder-operator-controller-manager-8d874c8fc-4wv6z\" (UID: \"80b25db7-e1c2-4787-89f4-952cd7e845ba\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.192611 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n968v\" (UniqueName: \"kubernetes.io/projected/2773429e-ccbb-43a4-a88a-a1cd41a63e10-kube-api-access-n968v\") pod \"glance-operator-controller-manager-8886f4c47-nsn26\" (UID: \"2773429e-ccbb-43a4-a88a-a1cd41a63e10\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.192952 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.196893 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jfhnm" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.200291 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.236238 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.262394 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.275631 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.283828 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xggwq" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.285085 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.285845 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.287214 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-knbw2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.290986 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293353 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllrk\" (UniqueName: \"kubernetes.io/projected/ce7feb31-22f3-42d9-83b1-cd9155abae99-kube-api-access-zllrk\") pod \"horizon-operator-controller-manager-5fb775575f-l5dv2\" (UID: \"ce7feb31-22f3-42d9-83b1-cd9155abae99\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n968v\" (UniqueName: \"kubernetes.io/projected/2773429e-ccbb-43a4-a88a-a1cd41a63e10-kube-api-access-n968v\") pod \"glance-operator-controller-manager-8886f4c47-nsn26\" (UID: \"2773429e-ccbb-43a4-a88a-a1cd41a63e10\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293432 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6q9\" (UniqueName: \"kubernetes.io/projected/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-kube-api-access-7s6q9\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" 
Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293459 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqxq\" (UniqueName: \"kubernetes.io/projected/dea1ae69-0c15-4228-a323-dc6f762e3c82-kube-api-access-fjqxq\") pod \"designate-operator-controller-manager-6d9697b7f4-bf56z\" (UID: \"dea1ae69-0c15-4228-a323-dc6f762e3c82\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293478 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qq8\" (UniqueName: \"kubernetes.io/projected/33b18ace-2da3-4bad-b093-d7db2aad7f50-kube-api-access-j2qq8\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v9fgj\" (UID: \"33b18ace-2da3-4bad-b093-d7db2aad7f50\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293499 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjd2d\" (UniqueName: \"kubernetes.io/projected/eb76dd84-30db-4769-852c-9a42814949d7-kube-api-access-xjd2d\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-mttxk\" (UID: \"eb76dd84-30db-4769-852c-9a42814949d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293520 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293542 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6s9\" (UniqueName: \"kubernetes.io/projected/80b25db7-e1c2-4787-89f4-952cd7e845ba-kube-api-access-ff6s9\") pod \"cinder-operator-controller-manager-8d874c8fc-4wv6z\" (UID: \"80b25db7-e1c2-4787-89f4-952cd7e845ba\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxdn\" (UniqueName: \"kubernetes.io/projected/cc5025a4-0807-478d-831a-c6ed424628a9-kube-api-access-qhxdn\") pod \"keystone-operator-controller-manager-84f48565d4-ddtbw\" (UID: \"cc5025a4-0807-478d-831a-c6ed424628a9\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.293583 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhpd\" (UniqueName: \"kubernetes.io/projected/d806e5bf-8346-46c0-a3de-5f8412e92b4f-kube-api-access-ldhpd\") pod \"heat-operator-controller-manager-69d6db494d-lmgq2\" (UID: \"d806e5bf-8346-46c0-a3de-5f8412e92b4f\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.314567 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.317239 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ff6s9\" (UniqueName: \"kubernetes.io/projected/80b25db7-e1c2-4787-89f4-952cd7e845ba-kube-api-access-ff6s9\") pod \"cinder-operator-controller-manager-8d874c8fc-4wv6z\" (UID: \"80b25db7-e1c2-4787-89f4-952cd7e845ba\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.317939 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqxq\" (UniqueName: \"kubernetes.io/projected/dea1ae69-0c15-4228-a323-dc6f762e3c82-kube-api-access-fjqxq\") pod \"designate-operator-controller-manager-6d9697b7f4-bf56z\" (UID: \"dea1ae69-0c15-4228-a323-dc6f762e3c82\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.320068 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.321444 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.322210 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjd2d\" (UniqueName: \"kubernetes.io/projected/eb76dd84-30db-4769-852c-9a42814949d7-kube-api-access-xjd2d\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-mttxk\" (UID: \"eb76dd84-30db-4769-852c-9a42814949d7\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.323313 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2zrxv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.323650 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n968v\" (UniqueName: \"kubernetes.io/projected/2773429e-ccbb-43a4-a88a-a1cd41a63e10-kube-api-access-n968v\") pod \"glance-operator-controller-manager-8886f4c47-nsn26\" (UID: \"2773429e-ccbb-43a4-a88a-a1cd41a63e10\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.330410 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.331155 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.333808 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hdxvm" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.346498 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.347588 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.350907 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fn7xp" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.351049 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.355157 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.356154 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.361269 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.369384 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.375784 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.376573 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.377954 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.387146 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.388713 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.390536 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.390773 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ppcml" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.390935 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zl4cx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.391333 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.393908 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.394549 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395193 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxdn\" (UniqueName: \"kubernetes.io/projected/cc5025a4-0807-478d-831a-c6ed424628a9-kube-api-access-qhxdn\") pod \"keystone-operator-controller-manager-84f48565d4-ddtbw\" (UID: \"cc5025a4-0807-478d-831a-c6ed424628a9\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395225 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhpd\" (UniqueName: \"kubernetes.io/projected/d806e5bf-8346-46c0-a3de-5f8412e92b4f-kube-api-access-ldhpd\") pod \"heat-operator-controller-manager-69d6db494d-lmgq2\" (UID: \"d806e5bf-8346-46c0-a3de-5f8412e92b4f\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395273 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhb6\" (UniqueName: \"kubernetes.io/projected/8553945b-dfe3-4c77-bb73-dce58c6ad3ba-kube-api-access-7rhb6\") pod \"mariadb-operator-controller-manager-67bf948998-fsdvn\" (UID: \"8553945b-dfe3-4c77-bb73-dce58c6ad3ba\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395294 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zllrk\" (UniqueName: \"kubernetes.io/projected/ce7feb31-22f3-42d9-83b1-cd9155abae99-kube-api-access-zllrk\") pod \"horizon-operator-controller-manager-5fb775575f-l5dv2\" (UID: \"ce7feb31-22f3-42d9-83b1-cd9155abae99\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395320 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchc4\" (UniqueName: \"kubernetes.io/projected/a3f6ed4d-518f-4415-9378-73fca072d431-kube-api-access-xchc4\") pod \"manila-operator-controller-manager-7dd968899f-5sgtg\" (UID: \"a3f6ed4d-518f-4415-9378-73fca072d431\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395338 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6q9\" (UniqueName: \"kubernetes.io/projected/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-kube-api-access-7s6q9\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395362 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qq8\" (UniqueName: \"kubernetes.io/projected/33b18ace-2da3-4bad-b093-d7db2aad7f50-kube-api-access-j2qq8\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v9fgj\" (UID: \"33b18ace-2da3-4bad-b093-d7db2aad7f50\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.395385 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.395481 4931 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.395517 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert podName:29ae7a52-ff32-4f97-8f6c-830ac4e4b40b nodeName:}" failed. No retries permitted until 2026-01-30 05:23:18.895503377 +0000 UTC m=+934.265413634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert") pod "infra-operator-controller-manager-79955696d6-tzxqv" (UID: "29ae7a52-ff32-4f97-8f6c-830ac4e4b40b") : secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.396410 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tx9mf" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.396698 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.407526 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.409165 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.416992 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhpd\" (UniqueName: \"kubernetes.io/projected/d806e5bf-8346-46c0-a3de-5f8412e92b4f-kube-api-access-ldhpd\") pod \"heat-operator-controller-manager-69d6db494d-lmgq2\" (UID: \"d806e5bf-8346-46c0-a3de-5f8412e92b4f\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.422855 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6q9\" (UniqueName: \"kubernetes.io/projected/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-kube-api-access-7s6q9\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.422862 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.427695 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zllrk\" (UniqueName: \"kubernetes.io/projected/ce7feb31-22f3-42d9-83b1-cd9155abae99-kube-api-access-zllrk\") pod \"horizon-operator-controller-manager-5fb775575f-l5dv2\" (UID: \"ce7feb31-22f3-42d9-83b1-cd9155abae99\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.429970 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhxdn\" (UniqueName: \"kubernetes.io/projected/cc5025a4-0807-478d-831a-c6ed424628a9-kube-api-access-qhxdn\") pod \"keystone-operator-controller-manager-84f48565d4-ddtbw\" (UID: \"cc5025a4-0807-478d-831a-c6ed424628a9\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.431178 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.432027 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.434859 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-p2zpj" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.437347 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qq8\" (UniqueName: \"kubernetes.io/projected/33b18ace-2da3-4bad-b093-d7db2aad7f50-kube-api-access-j2qq8\") pod \"ironic-operator-controller-manager-5f4b8bd54d-v9fgj\" (UID: \"33b18ace-2da3-4bad-b093-d7db2aad7f50\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.438865 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.452551 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.482842 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.484795 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.486616 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-vk7pd" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496222 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xbx\" (UniqueName: \"kubernetes.io/projected/2b83a9b3-5579-438f-8f65-effa382b726c-kube-api-access-c8xbx\") pod \"nova-operator-controller-manager-55bff696bd-5l9jv\" (UID: \"2b83a9b3-5579-438f-8f65-effa382b726c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496270 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nncsq\" (UniqueName: \"kubernetes.io/projected/59634caa-7fe0-49a1-98bf-dbc61a15f495-kube-api-access-nncsq\") pod \"placement-operator-controller-manager-5b964cf4cd-t4scx\" (UID: \"59634caa-7fe0-49a1-98bf-dbc61a15f495\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj4w\" (UniqueName: \"kubernetes.io/projected/a536697c-8056-4907-a09e-b23aa129435d-kube-api-access-hcj4w\") pod \"ovn-operator-controller-manager-788c46999f-mkk7j\" (UID: \"a536697c-8056-4907-a09e-b23aa129435d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496357 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496386 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zmp\" (UniqueName: \"kubernetes.io/projected/5e6de10d-baf2-4ef4-9acf-d093ee65c4fd-kube-api-access-f5zmp\") pod \"neutron-operator-controller-manager-585dbc889-wssqz\" (UID: \"5e6de10d-baf2-4ef4-9acf-d093ee65c4fd\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496409 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nt5x\" (UniqueName: \"kubernetes.io/projected/47b128c8-46ef-422c-aabc-1220f85fef83-kube-api-access-7nt5x\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhb6\" (UniqueName: \"kubernetes.io/projected/8553945b-dfe3-4c77-bb73-dce58c6ad3ba-kube-api-access-7rhb6\") pod \"mariadb-operator-controller-manager-67bf948998-fsdvn\" (UID: 
\"8553945b-dfe3-4c77-bb73-dce58c6ad3ba\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496495 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqdv\" (UniqueName: \"kubernetes.io/projected/456074da-531d-471b-92d3-cb4ea156bfae-kube-api-access-2hqdv\") pod \"octavia-operator-controller-manager-6687f8d877-kndp7\" (UID: \"456074da-531d-471b-92d3-cb4ea156bfae\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496515 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchc4\" (UniqueName: \"kubernetes.io/projected/a3f6ed4d-518f-4415-9378-73fca072d431-kube-api-access-xchc4\") pod \"manila-operator-controller-manager-7dd968899f-5sgtg\" (UID: \"a3f6ed4d-518f-4415-9378-73fca072d431\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.496943 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.517030 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhb6\" (UniqueName: \"kubernetes.io/projected/8553945b-dfe3-4c77-bb73-dce58c6ad3ba-kube-api-access-7rhb6\") pod \"mariadb-operator-controller-manager-67bf948998-fsdvn\" (UID: \"8553945b-dfe3-4c77-bb73-dce58c6ad3ba\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.522414 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchc4\" (UniqueName: \"kubernetes.io/projected/a3f6ed4d-518f-4415-9378-73fca072d431-kube-api-access-xchc4\") pod \"manila-operator-controller-manager-7dd968899f-5sgtg\" (UID: \"a3f6ed4d-518f-4415-9378-73fca072d431\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.525529 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.537028 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.562610 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.563313 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.568805 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-rr2qh" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.575612 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.594085 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.600642 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqdv\" (UniqueName: \"kubernetes.io/projected/456074da-531d-471b-92d3-cb4ea156bfae-kube-api-access-2hqdv\") pod \"octavia-operator-controller-manager-6687f8d877-kndp7\" (UID: \"456074da-531d-471b-92d3-cb4ea156bfae\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.600711 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xbx\" (UniqueName: \"kubernetes.io/projected/2b83a9b3-5579-438f-8f65-effa382b726c-kube-api-access-c8xbx\") pod \"nova-operator-controller-manager-55bff696bd-5l9jv\" (UID: \"2b83a9b3-5579-438f-8f65-effa382b726c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.600768 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nncsq\" (UniqueName: \"kubernetes.io/projected/59634caa-7fe0-49a1-98bf-dbc61a15f495-kube-api-access-nncsq\") pod \"placement-operator-controller-manager-5b964cf4cd-t4scx\" (UID: \"59634caa-7fe0-49a1-98bf-dbc61a15f495\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.600871 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx2b\" (UniqueName: \"kubernetes.io/projected/3d63764e-5f26-4a63-870a-af0e86eb5d23-kube-api-access-wvx2b\") pod \"swift-operator-controller-manager-68fc8c869-gqvgs\" (UID: \"3d63764e-5f26-4a63-870a-af0e86eb5d23\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.600925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj4w\" (UniqueName: \"kubernetes.io/projected/a536697c-8056-4907-a09e-b23aa129435d-kube-api-access-hcj4w\") pod \"ovn-operator-controller-manager-788c46999f-mkk7j\" (UID: \"a536697c-8056-4907-a09e-b23aa129435d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.600962 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.601055 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5zmp\" (UniqueName: \"kubernetes.io/projected/5e6de10d-baf2-4ef4-9acf-d093ee65c4fd-kube-api-access-f5zmp\") pod \"neutron-operator-controller-manager-585dbc889-wssqz\" (UID: \"5e6de10d-baf2-4ef4-9acf-d093ee65c4fd\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.601200 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ntrl\" (UniqueName: 
\"kubernetes.io/projected/8e470db6-3785-4da2-9b83-5242d6712d6a-kube-api-access-5ntrl\") pod \"telemetry-operator-controller-manager-64b5b76f97-gqv2m\" (UID: \"8e470db6-3785-4da2-9b83-5242d6712d6a\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.601257 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nt5x\" (UniqueName: \"kubernetes.io/projected/47b128c8-46ef-422c-aabc-1220f85fef83-kube-api-access-7nt5x\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.602533 4931 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.602602 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert podName:47b128c8-46ef-422c-aabc-1220f85fef83 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:19.102576952 +0000 UTC m=+934.472487199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" (UID: "47b128c8-46ef-422c-aabc-1220f85fef83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.606541 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.630517 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj4w\" (UniqueName: \"kubernetes.io/projected/a536697c-8056-4907-a09e-b23aa129435d-kube-api-access-hcj4w\") pod \"ovn-operator-controller-manager-788c46999f-mkk7j\" (UID: \"a536697c-8056-4907-a09e-b23aa129435d\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.631123 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqdv\" (UniqueName: \"kubernetes.io/projected/456074da-531d-471b-92d3-cb4ea156bfae-kube-api-access-2hqdv\") pod \"octavia-operator-controller-manager-6687f8d877-kndp7\" (UID: \"456074da-531d-471b-92d3-cb4ea156bfae\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.636846 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5zmp\" (UniqueName: \"kubernetes.io/projected/5e6de10d-baf2-4ef4-9acf-d093ee65c4fd-kube-api-access-f5zmp\") pod \"neutron-operator-controller-manager-585dbc889-wssqz\" (UID: \"5e6de10d-baf2-4ef4-9acf-d093ee65c4fd\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.637603 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xbx\" (UniqueName: \"kubernetes.io/projected/2b83a9b3-5579-438f-8f65-effa382b726c-kube-api-access-c8xbx\") pod \"nova-operator-controller-manager-55bff696bd-5l9jv\" (UID: \"2b83a9b3-5579-438f-8f65-effa382b726c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.643993 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nncsq\" (UniqueName: \"kubernetes.io/projected/59634caa-7fe0-49a1-98bf-dbc61a15f495-kube-api-access-nncsq\") pod \"placement-operator-controller-manager-5b964cf4cd-t4scx\" (UID: \"59634caa-7fe0-49a1-98bf-dbc61a15f495\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.644899 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vqp2s"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.646464 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.649236 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vqp2s"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.650780 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-g7tvd" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.657320 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nt5x\" (UniqueName: \"kubernetes.io/projected/47b128c8-46ef-422c-aabc-1220f85fef83-kube-api-access-7nt5x\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.663394 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.706137 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvx2b\" (UniqueName: \"kubernetes.io/projected/3d63764e-5f26-4a63-870a-af0e86eb5d23-kube-api-access-wvx2b\") pod \"swift-operator-controller-manager-68fc8c869-gqvgs\" (UID: \"3d63764e-5f26-4a63-870a-af0e86eb5d23\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.706206 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ntrl\" (UniqueName: \"kubernetes.io/projected/8e470db6-3785-4da2-9b83-5242d6712d6a-kube-api-access-5ntrl\") pod \"telemetry-operator-controller-manager-64b5b76f97-gqv2m\" (UID: \"8e470db6-3785-4da2-9b83-5242d6712d6a\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.706274 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h8k\" (UniqueName: \"kubernetes.io/projected/9e5eb1e9-111a-4230-92d6-5b1fbc332ada-kube-api-access-k9h8k\") pod \"test-operator-controller-manager-56f8bfcd9f-vccxr\" (UID: \"9e5eb1e9-111a-4230-92d6-5b1fbc332ada\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.724588 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvx2b\" (UniqueName: \"kubernetes.io/projected/3d63764e-5f26-4a63-870a-af0e86eb5d23-kube-api-access-wvx2b\") pod \"swift-operator-controller-manager-68fc8c869-gqvgs\" (UID: \"3d63764e-5f26-4a63-870a-af0e86eb5d23\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.728804 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.729686 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ntrl\" (UniqueName: \"kubernetes.io/projected/8e470db6-3785-4da2-9b83-5242d6712d6a-kube-api-access-5ntrl\") pod \"telemetry-operator-controller-manager-64b5b76f97-gqv2m\" (UID: \"8e470db6-3785-4da2-9b83-5242d6712d6a\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.745156 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.764894 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.765698 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.776273 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.776352 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.776898 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sxl8b" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.783024 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.786816 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.799889 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.807123 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2f7b\" (UniqueName: \"kubernetes.io/projected/6d92f2e0-367c-428a-bcd5-cf6e5846046f-kube-api-access-h2f7b\") pod \"watcher-operator-controller-manager-564965969-vqp2s\" (UID: \"6d92f2e0-367c-428a-bcd5-cf6e5846046f\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.807299 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9h8k\" (UniqueName: \"kubernetes.io/projected/9e5eb1e9-111a-4230-92d6-5b1fbc332ada-kube-api-access-k9h8k\") pod \"test-operator-controller-manager-56f8bfcd9f-vccxr\" (UID: \"9e5eb1e9-111a-4230-92d6-5b1fbc332ada\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.812662 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.829498 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9h8k\" (UniqueName: \"kubernetes.io/projected/9e5eb1e9-111a-4230-92d6-5b1fbc332ada-kube-api-access-k9h8k\") pod \"test-operator-controller-manager-56f8bfcd9f-vccxr\" (UID: \"9e5eb1e9-111a-4230-92d6-5b1fbc332ada\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.831830 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.853440 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.854817 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.863802 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-s952l" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.868953 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.893342 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.899361 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z"] Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.909470 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.909511 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.909572 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2f7b\" (UniqueName: \"kubernetes.io/projected/6d92f2e0-367c-428a-bcd5-cf6e5846046f-kube-api-access-h2f7b\") pod \"watcher-operator-controller-manager-564965969-vqp2s\" (UID: \"6d92f2e0-367c-428a-bcd5-cf6e5846046f\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.909622 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.909640 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsnh2\" (UniqueName: \"kubernetes.io/projected/5852e12a-376e-420f-a0fd-efecae7ef623-kube-api-access-qsnh2\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.909761 4931 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:18 crc kubenswrapper[4931]: E0130 05:23:18.909805 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert podName:29ae7a52-ff32-4f97-8f6c-830ac4e4b40b nodeName:}" failed. No retries permitted until 2026-01-30 05:23:19.909790859 +0000 UTC m=+935.279701116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert") pod "infra-operator-controller-manager-79955696d6-tzxqv" (UID: "29ae7a52-ff32-4f97-8f6c-830ac4e4b40b") : secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:18 crc kubenswrapper[4931]: I0130 05:23:18.939459 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2f7b\" (UniqueName: \"kubernetes.io/projected/6d92f2e0-367c-428a-bcd5-cf6e5846046f-kube-api-access-h2f7b\") pod \"watcher-operator-controller-manager-564965969-vqp2s\" (UID: \"6d92f2e0-367c-428a-bcd5-cf6e5846046f\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:18.999857 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:18.999873 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.010618 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.010651 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsnh2\" (UniqueName: \"kubernetes.io/projected/5852e12a-376e-420f-a0fd-efecae7ef623-kube-api-access-qsnh2\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.010686 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.010746 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvj6\" (UniqueName: \"kubernetes.io/projected/ad890bc5-5b72-4833-86d5-2c022cd87e4a-kube-api-access-clvj6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v4vnz\" (UID: \"ad890bc5-5b72-4833-86d5-2c022cd87e4a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.010875 4931 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.010921 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:19.51090488 +0000 UTC m=+934.880815137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "metrics-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.011151 4931 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.011209 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:19.511191588 +0000 UTC m=+934.881101835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.030858 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.033807 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsnh2\" (UniqueName: \"kubernetes.io/projected/5852e12a-376e-420f-a0fd-efecae7ef623-kube-api-access-qsnh2\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.113248 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvj6\" (UniqueName: \"kubernetes.io/projected/ad890bc5-5b72-4833-86d5-2c022cd87e4a-kube-api-access-clvj6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v4vnz\" (UID: \"ad890bc5-5b72-4833-86d5-2c022cd87e4a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.113411 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.113731 4931 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.113825 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert podName:47b128c8-46ef-422c-aabc-1220f85fef83 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:20.113762398 +0000 UTC m=+935.483672655 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" (UID: "47b128c8-46ef-422c-aabc-1220f85fef83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.133119 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.141667 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvj6\" (UniqueName: \"kubernetes.io/projected/ad890bc5-5b72-4833-86d5-2c022cd87e4a-kube-api-access-clvj6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v4vnz\" (UID: \"ad890bc5-5b72-4833-86d5-2c022cd87e4a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.145458 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.209987 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.335668 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.339930 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.370852 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2"] Jan 30 05:23:19 crc kubenswrapper[4931]: W0130 05:23:19.374916 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd806e5bf_8346_46c0_a3de_5f8412e92b4f.slice/crio-79c2b361d544854d19894ef245ced794b18dd59b97294328a5498b546fff9d74 WatchSource:0}: Error finding container 79c2b361d544854d19894ef245ced794b18dd59b97294328a5498b546fff9d74: Status 404 returned error can't find the container with id 79c2b361d544854d19894ef245ced794b18dd59b97294328a5498b546fff9d74 Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.533119 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.533172 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.533281 4931 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret 
"webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.533311 4931 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.533336 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:20.533323796 +0000 UTC m=+935.903234053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.533383 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:20.533364997 +0000 UTC m=+935.903275244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "metrics-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.568383 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" event={"ID":"cc5025a4-0807-478d-831a-c6ed424628a9","Type":"ContainerStarted","Data":"06b2923c961bf0d4ee316001972de932f77d28df5f63fb55ba281f4acbd3edd5"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.570260 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" event={"ID":"ce7feb31-22f3-42d9-83b1-cd9155abae99","Type":"ContainerStarted","Data":"699d1979a4e6c22e40e2958d0727b8a813afa0c8c42fb39279acb007eef3702b"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.571841 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" event={"ID":"d806e5bf-8346-46c0-a3de-5f8412e92b4f","Type":"ContainerStarted","Data":"79c2b361d544854d19894ef245ced794b18dd59b97294328a5498b546fff9d74"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.573078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" event={"ID":"33b18ace-2da3-4bad-b093-d7db2aad7f50","Type":"ContainerStarted","Data":"516b01dca3eca9c8675ba023cbc5f5231817840726a7660b277587917f25fdc3"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.574124 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" event={"ID":"eb76dd84-30db-4769-852c-9a42814949d7","Type":"ContainerStarted","Data":"14f29a0f74f382c91558a5ade188b393f9bed0c831bc5c494f955e657bf8d4ea"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.576190 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" 
event={"ID":"2773429e-ccbb-43a4-a88a-a1cd41a63e10","Type":"ContainerStarted","Data":"07d7a793d68db451ddcc1bf5a4730ad681e0ad391b3fa174b65490081056f1d5"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.577546 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" event={"ID":"dea1ae69-0c15-4228-a323-dc6f762e3c82","Type":"ContainerStarted","Data":"f1884e6ab103cea1bba8d73c115d3edba1f8b0dbeeae34f1637b34025862f1ee"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.578311 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" event={"ID":"80b25db7-e1c2-4787-89f4-952cd7e845ba","Type":"ContainerStarted","Data":"9eb6821fa91ed10a1bd3b567ceeda4f00173041848682e4c89c5ffbbd889e058"} Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.736933 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.744545 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.749785 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.756689 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.777899 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.785914 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.802979 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv"] Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.810872 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hcj4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-mkk7j_openstack-operators(a536697c-8056-4907-a09e-b23aa129435d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.811301 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2f7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-vqp2s_openstack-operators(6d92f2e0-367c-428a-bcd5-cf6e5846046f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.812391 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" podUID="a536697c-8056-4907-a09e-b23aa129435d" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.812399 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" podUID="6d92f2e0-367c-428a-bcd5-cf6e5846046f" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.829519 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-vqp2s"] Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.833237 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v4vnz_openstack-operators(ad890bc5-5b72-4833-86d5-2c022cd87e4a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.834908 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" podUID="ad890bc5-5b72-4833-86d5-2c022cd87e4a" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.834985 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz"] Jan 30 05:23:19 crc kubenswrapper[4931]: W0130 05:23:19.840540 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d63764e_5f26_4a63_870a_af0e86eb5d23.slice/crio-1e3ea5b7e3a1c069f4d22ce87c1264356e67a543d3ab638dbbe5e61a5c08e1d1 WatchSource:0}: Error finding container 1e3ea5b7e3a1c069f4d22ce87c1264356e67a543d3ab638dbbe5e61a5c08e1d1: Status 404 returned error can't find the container with id 1e3ea5b7e3a1c069f4d22ce87c1264356e67a543d3ab638dbbe5e61a5c08e1d1 Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.840689 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hqdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-kndp7_openstack-operators(456074da-531d-471b-92d3-cb4ea156bfae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.841791 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" podUID="456074da-531d-471b-92d3-cb4ea156bfae" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.841963 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvx2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-gqvgs_openstack-operators(3d63764e-5f26-4a63-870a-af0e86eb5d23): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 05:23:19 crc kubenswrapper[4931]: W0130 05:23:19.842311 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e5eb1e9_111a_4230_92d6_5b1fbc332ada.slice/crio-6daadfb949bcd6080310d50bcb3aeaa7d05d66803d1bb9a2b60ca3c9b04244c3 WatchSource:0}: Error finding container 6daadfb949bcd6080310d50bcb3aeaa7d05d66803d1bb9a2b60ca3c9b04244c3: Status 404 returned error can't find the container with id 6daadfb949bcd6080310d50bcb3aeaa7d05d66803d1bb9a2b60ca3c9b04244c3 Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.842744 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr"] Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.843414 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" podUID="3d63764e-5f26-4a63-870a-af0e86eb5d23" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.844798 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k9h8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-vccxr_openstack-operators(9e5eb1e9-111a-4230-92d6-5b1fbc332ada): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.846050 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" podUID="9e5eb1e9-111a-4230-92d6-5b1fbc332ada" Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.849586 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.855055 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs"] Jan 30 05:23:19 crc kubenswrapper[4931]: I0130 05:23:19.945183 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.945372 4931 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:19 crc kubenswrapper[4931]: E0130 05:23:19.945478 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert podName:29ae7a52-ff32-4f97-8f6c-830ac4e4b40b nodeName:}" failed. No retries permitted until 2026-01-30 05:23:21.945452199 +0000 UTC m=+937.315362516 (durationBeforeRetry 2s). 
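
The burst of "ErrImagePull: pull QPS exceeded" failures above is not a registry-side error: the kubelet rate-limits its own image pulls with a client-side token bucket (registryPullQPS / registryBurst in KubeletConfiguration), and a dozen operator pods requesting images in the same second overflow it. A minimal sketch of that failure mode, assuming the upstream defaults of 5 QPS with a burst of 10 and using the same k8s.io/client-go/util/flowcontrol limiter the kubelet wraps around its image service:

package main

import (
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// 5.0 and 10 stand in for registryPullQPS / registryBurst; these are
	// the upstream defaults, assumed here rather than read from this node.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)

	for i := 0; i < 15; i++ {
		if limiter.TryAccept() {
			fmt.Printf("pull %2d: admitted\n", i)
		} else {
			// Overflowing pulls fail immediately rather than queueing; the
			// kubelet surfaces this branch as ErrImagePull: pull QPS exceeded.
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}

Raising registryPullQPS (or setting it to 0, which disables the limit) avoids this when many pods land on a node at once; left alone, the affected pods recover on their own as their back-off windows expire, which is what the later entries show.
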
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert") pod "infra-operator-controller-manager-79955696d6-tzxqv" (UID: "29ae7a52-ff32-4f97-8f6c-830ac4e4b40b") : secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.146409 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.146575 4931 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.146628 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert podName:47b128c8-46ef-422c-aabc-1220f85fef83 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:22.1466076 +0000 UTC m=+937.516517857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" (UID: "47b128c8-46ef-422c-aabc-1220f85fef83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.552248 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.552302 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.552443 4931 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.552483 4931 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.552506 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:22.552492341 +0000 UTC m=+937.922402598 (durationBeforeRetry 2s). 
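
These cert mount failures are simpler than they look: on every retry the kubelet's secret volume plugin just GETs the named Secret, and infra-operator-webhook-server-cert / openstack-baremetal-operator-webhook-server-cert do not exist yet (they are normally provisioned by the operator bundle's certificate machinery). A sketch of the same check run from outside the node with client-go, assuming a reachable kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The same lookup secret.go performs before it can materialize the volume.
	_, err = cs.CoreV1().Secrets("openstack-operators").
		Get(context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
	if err != nil {
		fmt.Println("kubelet would log:", err) // secret "..." not found
		return
	}
	fmt.Println("secret exists; the pending MountVolume.SetUp succeeds on its next retry")
}

Once whatever issues the certificate creates the Secret, no kubelet action is needed; the retry loop below picks it up by itself.
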
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "webhook-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.552592 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:22.552567963 +0000 UTC m=+937.922478320 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "metrics-server-cert" not found Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.605708 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" event={"ID":"5e6de10d-baf2-4ef4-9acf-d093ee65c4fd","Type":"ContainerStarted","Data":"06ac2eca3df17fbdc2e9847bff81089b0dfd6b12a02f6a6fbd704819801c9c82"} Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.608047 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" event={"ID":"3d63764e-5f26-4a63-870a-af0e86eb5d23","Type":"ContainerStarted","Data":"1e3ea5b7e3a1c069f4d22ce87c1264356e67a543d3ab638dbbe5e61a5c08e1d1"} Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.611261 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" event={"ID":"59634caa-7fe0-49a1-98bf-dbc61a15f495","Type":"ContainerStarted","Data":"a85556acf5fd5c5bb24304c0807640e47defb000e0b39b263c48f504303b1dc1"} Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.612107 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" podUID="3d63764e-5f26-4a63-870a-af0e86eb5d23" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.612811 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" event={"ID":"9e5eb1e9-111a-4230-92d6-5b1fbc332ada","Type":"ContainerStarted","Data":"6daadfb949bcd6080310d50bcb3aeaa7d05d66803d1bb9a2b60ca3c9b04244c3"} Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.614023 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" podUID="9e5eb1e9-111a-4230-92d6-5b1fbc332ada" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.614937 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" 
event={"ID":"ad890bc5-5b72-4833-86d5-2c022cd87e4a","Type":"ContainerStarted","Data":"ff84e7ded79401d6cc907dd06658e56ce815b51b4f7e0f258d0c089befc32f1f"} Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.617302 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" podUID="ad890bc5-5b72-4833-86d5-2c022cd87e4a" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.628728 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" event={"ID":"8e470db6-3785-4da2-9b83-5242d6712d6a","Type":"ContainerStarted","Data":"4d1529c3cd06cef7c859cdbded1996e56a11e05d299b8ac71ceae26922cf7c8c"} Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.632396 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" event={"ID":"6d92f2e0-367c-428a-bcd5-cf6e5846046f","Type":"ContainerStarted","Data":"589ce9e6eb6354776adbacc8ea36e2442305cc2b39239273b6f66d743190e980"} Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.633606 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" podUID="6d92f2e0-367c-428a-bcd5-cf6e5846046f" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.636211 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" event={"ID":"a3f6ed4d-518f-4415-9378-73fca072d431","Type":"ContainerStarted","Data":"87382b6a8b2e9985466b0f3911466583658fa03b4a291a32bd38a3b33905df61"} Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.637616 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" event={"ID":"456074da-531d-471b-92d3-cb4ea156bfae","Type":"ContainerStarted","Data":"4d829eec5ce9b3580ee3f4dac0b7b12a402cbdd7aae399082b6f52677c6c296b"} Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.639509 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" podUID="456074da-531d-471b-92d3-cb4ea156bfae" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.640038 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" event={"ID":"a536697c-8056-4907-a09e-b23aa129435d","Type":"ContainerStarted","Data":"9dd8b5ea4a528dbe0f4ef4b500a30d4f620becea5d8059f124470198e50517bf"} Jan 30 05:23:20 crc kubenswrapper[4931]: E0130 05:23:20.644267 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" podUID="a536697c-8056-4907-a09e-b23aa129435d" Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.645956 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" event={"ID":"8553945b-dfe3-4c77-bb73-dce58c6ad3ba","Type":"ContainerStarted","Data":"31e5cb4b3d4087f00cd1b494b56b823920b79283e25c58e355b3d380020832c3"} Jan 30 05:23:20 crc kubenswrapper[4931]: I0130 05:23:20.648088 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" event={"ID":"2b83a9b3-5579-438f-8f65-effa382b726c","Type":"ContainerStarted","Data":"db4b8f222693a8b5dce73a0b7bfb48f1466a4469dcc9166f10b46355eec50f5c"} Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.657110 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" podUID="6d92f2e0-367c-428a-bcd5-cf6e5846046f" Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.658548 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" podUID="456074da-531d-471b-92d3-cb4ea156bfae" Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.658789 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" podUID="ad890bc5-5b72-4833-86d5-2c022cd87e4a" Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.658807 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" podUID="9e5eb1e9-111a-4230-92d6-5b1fbc332ada" Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.658830 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" podUID="a536697c-8056-4907-a09e-b23aa129435d" Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.659049 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" podUID="3d63764e-5f26-4a63-870a-af0e86eb5d23" Jan 30 05:23:21 crc kubenswrapper[4931]: I0130 05:23:21.967152 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.967282 4931 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:21 crc kubenswrapper[4931]: E0130 05:23:21.967359 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert podName:29ae7a52-ff32-4f97-8f6c-830ac4e4b40b nodeName:}" failed. No retries permitted until 2026-01-30 05:23:25.967332207 +0000 UTC m=+941.337242464 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert") pod "infra-operator-controller-manager-79955696d6-tzxqv" (UID: "29ae7a52-ff32-4f97-8f6c-830ac4e4b40b") : secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:22 crc kubenswrapper[4931]: I0130 05:23:22.169575 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:22 crc kubenswrapper[4931]: E0130 05:23:22.169701 4931 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:22 crc kubenswrapper[4931]: E0130 05:23:22.169765 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert podName:47b128c8-46ef-422c-aabc-1220f85fef83 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:26.169748683 +0000 UTC m=+941.539658930 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" (UID: "47b128c8-46ef-422c-aabc-1220f85fef83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:22 crc kubenswrapper[4931]: I0130 05:23:22.576949 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:22 crc kubenswrapper[4931]: I0130 05:23:22.577039 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:22 crc kubenswrapper[4931]: E0130 05:23:22.577107 4931 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 05:23:22 crc kubenswrapper[4931]: E0130 05:23:22.577162 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:26.577146126 +0000 UTC m=+941.947056383 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "metrics-server-cert" not found Jan 30 05:23:22 crc kubenswrapper[4931]: E0130 05:23:22.577480 4931 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 05:23:22 crc kubenswrapper[4931]: E0130 05:23:22.577551 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:26.577529206 +0000 UTC m=+941.947439463 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "webhook-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: I0130 05:23:26.025343 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.025529 4931 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.025943 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert podName:29ae7a52-ff32-4f97-8f6c-830ac4e4b40b nodeName:}" failed. No retries permitted until 2026-01-30 05:23:34.025927162 +0000 UTC m=+949.395837409 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert") pod "infra-operator-controller-manager-79955696d6-tzxqv" (UID: "29ae7a52-ff32-4f97-8f6c-830ac4e4b40b") : secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: I0130 05:23:26.228226 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.228489 4931 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.228624 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert podName:47b128c8-46ef-422c-aabc-1220f85fef83 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:34.228579535 +0000 UTC m=+949.598489802 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" (UID: "47b128c8-46ef-422c-aabc-1220f85fef83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: I0130 05:23:26.634329 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:26 crc kubenswrapper[4931]: I0130 05:23:26.634405 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.634557 4931 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.634557 4931 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.634618 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:34.63460181 +0000 UTC m=+950.004512077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "metrics-server-cert" not found Jan 30 05:23:26 crc kubenswrapper[4931]: E0130 05:23:26.634635 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:34.634627851 +0000 UTC m=+950.004538118 (durationBeforeRetry 8s). 
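
Note the retry cadence across these mount failures: durationBeforeRetry has grown 2s -> 4s -> 8s, and reaches 16s further down. The volume operation executor keeps a per-operation exponential back-off and refuses early retries ("No retries permitted until ..."). A toy sketch of that schedule; the doubling and the 2s start match the log, while the cap is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second
	const maxDelay = 2 * time.Minute // assumed cap, not taken from the kubelet source
	next := time.Now()

	for attempt := 1; attempt <= 5; attempt++ {
		next = next.Add(delay)
		fmt.Printf("attempt %d failed; no retries permitted for %s (until %s)\n",
			attempt, delay, next.Format(time.RFC3339))
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
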
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "webhook-server-cert" not found Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.363303 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.363361 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.364261 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.364751 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60aa2b4543ca7f8bb21bef3c167f0da099829d6ddc544f185e09f8c4de74ad75"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.364810 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://60aa2b4543ca7f8bb21bef3c167f0da099829d6ddc544f185e09f8c4de74ad75" gracePeriod=600 Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.721596 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="60aa2b4543ca7f8bb21bef3c167f0da099829d6ddc544f185e09f8c4de74ad75" exitCode=0 Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.721637 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"60aa2b4543ca7f8bb21bef3c167f0da099829d6ddc544f185e09f8c4de74ad75"} Jan 30 05:23:27 crc kubenswrapper[4931]: I0130 05:23:27.721667 4931 scope.go:117] "RemoveContainer" containerID="1794ca6ffdd404c39dffe9fa048526a3a78869de00d876a52d3bd280c8bbc2a2" Jan 30 05:23:33 crc kubenswrapper[4931]: E0130 05:23:33.153046 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 30 05:23:33 crc kubenswrapper[4931]: E0130 05:23:33.153956 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2qq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-v9fgj_openstack-operators(33b18ace-2da3-4bad-b093-d7db2aad7f50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:23:33 crc kubenswrapper[4931]: E0130 05:23:33.155243 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" podUID="33b18ace-2da3-4bad-b093-d7db2aad7f50" Jan 30 05:23:33 crc kubenswrapper[4931]: E0130 05:23:33.782655 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" podUID="33b18ace-2da3-4bad-b093-d7db2aad7f50" Jan 30 05:23:34 crc kubenswrapper[4931]: I0130 05:23:34.040931 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" Jan 30 05:23:34 crc 
kubenswrapper[4931]: E0130 05:23:34.041100 4931 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.041169 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert podName:29ae7a52-ff32-4f97-8f6c-830ac4e4b40b nodeName:}" failed. No retries permitted until 2026-01-30 05:23:50.04115028 +0000 UTC m=+965.411060537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert") pod "infra-operator-controller-manager-79955696d6-tzxqv" (UID: "29ae7a52-ff32-4f97-8f6c-830ac4e4b40b") : secret "infra-operator-webhook-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.082724 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.082885 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjqxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
designate-operator-controller-manager-6d9697b7f4-bf56z_openstack-operators(dea1ae69-0c15-4228-a323-dc6f762e3c82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.084485 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" podUID="dea1ae69-0c15-4228-a323-dc6f762e3c82" Jan 30 05:23:34 crc kubenswrapper[4931]: I0130 05:23:34.244293 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.244508 4931 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.244562 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert podName:47b128c8-46ef-422c-aabc-1220f85fef83 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:50.244544233 +0000 UTC m=+965.614454500 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" (UID: "47b128c8-46ef-422c-aabc-1220f85fef83") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: I0130 05:23:34.650175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:34 crc kubenswrapper[4931]: I0130 05:23:34.650230 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.650365 4931 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.650414 4931 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.650454 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. 
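
The ironic and designate pulls above fail with a different error than the QPS-limited ones: "rpc error: code = Canceled desc = copying config: context canceled" means CRI-O was still copying the image when the kubelet's gRPC context went away, so the pull was abandoned mid-flight rather than rejected up front. A minimal sketch of that shape; the timings are arbitrary assumptions chosen to force the cancelled branch:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pull stands in for the runtime-side work (copying config, layers, ...).
func pull(ctx context.Context) error {
	select {
	case <-time.After(1 * time.Second):
		return nil
	case <-ctx.Done():
		return ctx.Err() // what the server reports as code = Canceled
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	errc := make(chan error, 1)
	go func() { errc <- pull(ctx) }()

	// Stand-in for the kubelet abandoning the request mid-pull
	// (runtime request timeout, pod resync, shutdown, ...).
	time.Sleep(100 * time.Millisecond)
	cancel()

	if err := <-errc; errors.Is(err, context.Canceled) {
		fmt.Println("PullImage failed:", err) // surfaces as ErrImagePull above
	}
}
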
No retries permitted until 2026-01-30 05:23:50.650434805 +0000 UTC m=+966.020345062 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "metrics-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.650494 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs podName:5852e12a-376e-420f-a0fd-efecae7ef623 nodeName:}" failed. No retries permitted until 2026-01-30 05:23:50.650477266 +0000 UTC m=+966.020387523 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-qpp9f" (UID: "5852e12a-376e-420f-a0fd-efecae7ef623") : secret "webhook-server-cert" not found Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.748142 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.748282 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xchc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-5sgtg_openstack-operators(a3f6ed4d-518f-4415-9378-73fca072d431): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.749384 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" podUID="a3f6ed4d-518f-4415-9378-73fca072d431" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.795525 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" podUID="a3f6ed4d-518f-4415-9378-73fca072d431" Jan 30 05:23:34 crc kubenswrapper[4931]: E0130 05:23:34.796239 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" podUID="dea1ae69-0c15-4228-a323-dc6f762e3c82" Jan 30 05:23:36 crc kubenswrapper[4931]: I0130 05:23:36.805769 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f"} Jan 30 05:23:37 crc kubenswrapper[4931]: I0130 05:23:37.830063 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" event={"ID":"ce7feb31-22f3-42d9-83b1-cd9155abae99","Type":"ContainerStarted","Data":"d3cab9b2e4ee8963cdfd7db313c28e5e14abdfed3fdc74d199ad0622b444578d"} Jan 30 05:23:37 crc kubenswrapper[4931]: I0130 05:23:37.830566 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" Jan 30 05:23:37 crc kubenswrapper[4931]: I0130 05:23:37.835995 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" 
event={"ID":"80b25db7-e1c2-4787-89f4-952cd7e845ba","Type":"ContainerStarted","Data":"c44314702bfc41dc7a507bed53fae47af3f510a1255238b7d83deebd6b131685"} Jan 30 05:23:37 crc kubenswrapper[4931]: I0130 05:23:37.836032 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" Jan 30 05:23:37 crc kubenswrapper[4931]: I0130 05:23:37.850321 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2" podStartSLOduration=4.217877207 podStartE2EDuration="19.850302556s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.183782714 +0000 UTC m=+934.553692971" lastFinishedPulling="2026-01-30 05:23:34.816208043 +0000 UTC m=+950.186118320" observedRunningTime="2026-01-30 05:23:37.843947682 +0000 UTC m=+953.213857969" watchObservedRunningTime="2026-01-30 05:23:37.850302556 +0000 UTC m=+953.220212813" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.843239 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" event={"ID":"eb76dd84-30db-4769-852c-9a42814949d7","Type":"ContainerStarted","Data":"ac1b2a0a43d20788ee404a395fdde4cb810deb70c9ea522720b066fa608a6a98"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.844355 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.853732 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" event={"ID":"2b83a9b3-5579-438f-8f65-effa382b726c","Type":"ContainerStarted","Data":"fc065fa52f3f82077a3221e1fa1735b74f16d593029388265c4a1a7c9c574370"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.854250 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.860983 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" event={"ID":"cc5025a4-0807-478d-831a-c6ed424628a9","Type":"ContainerStarted","Data":"7d8ecc6ac1fdf3e3641188b711a0755410e43838332f7877df696903d674e460"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.861526 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.863991 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" event={"ID":"3d63764e-5f26-4a63-870a-af0e86eb5d23","Type":"ContainerStarted","Data":"da1b433065e2e87a24641bff526ae9b9c0f32f0f87e7653bcf885cb86419ad0f"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.864318 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.867780 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" 
event={"ID":"9e5eb1e9-111a-4230-92d6-5b1fbc332ada","Type":"ContainerStarted","Data":"480d7fd833d0af0b1986b5fd062806d3f39ea5d40c5b0efbe29fce5e12054cdf"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.868128 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.869100 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" event={"ID":"d806e5bf-8346-46c0-a3de-5f8412e92b4f","Type":"ContainerStarted","Data":"32f9648fa8035ad706f5ef17b8b74cce10cfdd6e508bf4cdc19932320e5474d7"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.869438 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.870264 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" event={"ID":"2773429e-ccbb-43a4-a88a-a1cd41a63e10","Type":"ContainerStarted","Data":"5d48d2c9aa3ee2fd32b371415f8f3fcad14bd90c4ff9cbd0fdb0019886b9a8bf"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.870615 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.874152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" event={"ID":"456074da-531d-471b-92d3-cb4ea156bfae","Type":"ContainerStarted","Data":"79be38d24d378e06b783fdd24e85fb505677806404cd6bc57aa6140912680645"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.874497 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.875372 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" event={"ID":"a536697c-8056-4907-a09e-b23aa129435d","Type":"ContainerStarted","Data":"aed51bac6c8458a9f0f204ccad30e73cc8979986750b94b3495be05b963d785c"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.875703 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.876520 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" event={"ID":"59634caa-7fe0-49a1-98bf-dbc61a15f495","Type":"ContainerStarted","Data":"fe49d6efca633ced99c67e5797d6f7ba6a765a838b713eb01e9c10374fda81e0"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.876843 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.879475 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" event={"ID":"8553945b-dfe3-4c77-bb73-dce58c6ad3ba","Type":"ContainerStarted","Data":"92b2144b7f36ac97060b832e1eebaa80170871d3e428616f6020807babefccd0"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 
05:23:38.879695 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.883510 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" event={"ID":"8e470db6-3785-4da2-9b83-5242d6712d6a","Type":"ContainerStarted","Data":"c9416a8be147da5a30fb922010687ce898274467c6bbfe71fc12e6ddbcbe95a9"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.883852 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.892752 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" event={"ID":"6d92f2e0-367c-428a-bcd5-cf6e5846046f","Type":"ContainerStarted","Data":"5fff86b6b08566970d03a7221c38f5ec2ae15bd27ff24102681d4c55d71bad36"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.893610 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.912491 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" event={"ID":"5e6de10d-baf2-4ef4-9acf-d093ee65c4fd","Type":"ContainerStarted","Data":"5f406d7a163233efebbdaaa4b40595ee6c44d87c16232dff8db32617a4297178"} Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.912528 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" Jan 30 05:23:38 crc kubenswrapper[4931]: I0130 05:23:38.928436 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z" podStartSLOduration=5.132896858 podStartE2EDuration="20.928405562s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:18.951015443 +0000 UTC m=+934.320925700" lastFinishedPulling="2026-01-30 05:23:34.746524147 +0000 UTC m=+950.116434404" observedRunningTime="2026-01-30 05:23:37.863929451 +0000 UTC m=+953.233839708" watchObservedRunningTime="2026-01-30 05:23:38.928405562 +0000 UTC m=+954.298315819" Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.023695 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk" podStartSLOduration=4.801171607 podStartE2EDuration="21.023675282s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.049022888 +0000 UTC m=+934.418933145" lastFinishedPulling="2026-01-30 05:23:35.271526563 +0000 UTC m=+950.641436820" observedRunningTime="2026-01-30 05:23:38.931691432 +0000 UTC m=+954.301601689" watchObservedRunningTime="2026-01-30 05:23:39.023675282 +0000 UTC m=+954.393585529" Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.026841 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" podStartSLOduration=4.793139496 podStartE2EDuration="21.026832179s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.793709476 +0000 UTC 
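Each pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E duration minus the image-pull window, taken on the monotonic clock (the m=+... offsets). A quick check in Go against the horizon-operator entry; the constants are copied straight from the log:

package main

import "fmt"

func main() {
	// Figures from the horizon-operator-controller-manager entry above.
	const (
		e2e              = 19.850302556  // podStartE2EDuration, seconds
		firstStartedPull = 934.553692971 // firstStartedPulling, m=+ offset
		lastFinishedPull = 950.186118320 // lastFinishedPulling, m=+ offset
	)
	// The SLO duration excludes time spent pulling images.
	slo := e2e - (lastFinishedPull - firstStartedPull)
	fmt.Printf("podStartSLOduration=%.9f\n", slo) // 4.217877207, as logged
}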
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.026841 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn" podStartSLOduration=4.793139496 podStartE2EDuration="21.026832179s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.793709476 +0000 UTC m=+935.163619733" lastFinishedPulling="2026-01-30 05:23:36.027402159 +0000 UTC m=+951.397312416" observedRunningTime="2026-01-30 05:23:39.013533513 +0000 UTC m=+954.383443770" watchObservedRunningTime="2026-01-30 05:23:39.026832179 +0000 UTC m=+954.396742436"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.081281 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw" podStartSLOduration=5.169252098 podStartE2EDuration="21.081260935s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.359400793 +0000 UTC m=+934.729311050" lastFinishedPulling="2026-01-30 05:23:35.27140962 +0000 UTC m=+950.641319887" observedRunningTime="2026-01-30 05:23:39.071896568 +0000 UTC m=+954.441806825" watchObservedRunningTime="2026-01-30 05:23:39.081260935 +0000 UTC m=+954.451171192"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.103689 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26" podStartSLOduration=5.5305280020000005 podStartE2EDuration="21.103671642s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.173408748 +0000 UTC m=+934.543319005" lastFinishedPulling="2026-01-30 05:23:34.746552388 +0000 UTC m=+950.116462645" observedRunningTime="2026-01-30 05:23:39.099757794 +0000 UTC m=+954.469668051" watchObservedRunningTime="2026-01-30 05:23:39.103671642 +0000 UTC m=+954.473581899"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.187581 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv" podStartSLOduration=6.165518154 podStartE2EDuration="21.187567039s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.793558732 +0000 UTC m=+935.163468989" lastFinishedPulling="2026-01-30 05:23:34.815607617 +0000 UTC m=+950.185517874" observedRunningTime="2026-01-30 05:23:39.181378719 +0000 UTC m=+954.551288976" watchObservedRunningTime="2026-01-30 05:23:39.187567039 +0000 UTC m=+954.557477296"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.234380 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx" podStartSLOduration=5.773483733 podStartE2EDuration="21.234360205s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.810534118 +0000 UTC m=+935.180444375" lastFinishedPulling="2026-01-30 05:23:35.27141059 +0000 UTC m=+950.641320847" observedRunningTime="2026-01-30 05:23:39.231882387 +0000 UTC m=+954.601792644" watchObservedRunningTime="2026-01-30 05:23:39.234360205 +0000 UTC m=+954.604270462"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.276751 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m" podStartSLOduration=6.260233388 podStartE2EDuration="21.276734331s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.794062685 +0000 UTC m=+935.163972942" lastFinishedPulling="2026-01-30 05:23:34.810563628 +0000 UTC m=+950.180473885" observedRunningTime="2026-01-30 05:23:39.276042442 +0000 UTC m=+954.645952689" watchObservedRunningTime="2026-01-30 05:23:39.276734331 +0000 UTC m=+954.646644588"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.335041 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs" podStartSLOduration=3.770093214 podStartE2EDuration="21.335020174s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.84185956 +0000 UTC m=+935.211769817" lastFinishedPulling="2026-01-30 05:23:37.40678653 +0000 UTC m=+952.776696777" observedRunningTime="2026-01-30 05:23:39.333878662 +0000 UTC m=+954.703788919" watchObservedRunningTime="2026-01-30 05:23:39.335020174 +0000 UTC m=+954.704930431"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.338844 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j" podStartSLOduration=3.7415599779999997 podStartE2EDuration="21.338832918s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.810763185 +0000 UTC m=+935.180673442" lastFinishedPulling="2026-01-30 05:23:37.408036125 +0000 UTC m=+952.777946382" observedRunningTime="2026-01-30 05:23:39.304757451 +0000 UTC m=+954.674667738" watchObservedRunningTime="2026-01-30 05:23:39.338832918 +0000 UTC m=+954.708743175"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.389592 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr" podStartSLOduration=3.828140469 podStartE2EDuration="21.389574244s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.844711308 +0000 UTC m=+935.214621565" lastFinishedPulling="2026-01-30 05:23:37.406145083 +0000 UTC m=+952.776055340" observedRunningTime="2026-01-30 05:23:39.384825253 +0000 UTC m=+954.754735510" watchObservedRunningTime="2026-01-30 05:23:39.389574244 +0000 UTC m=+954.759484501"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.439311 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz" podStartSLOduration=6.42317841 podStartE2EDuration="21.439289661s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.793813899 +0000 UTC m=+935.163724156" lastFinishedPulling="2026-01-30 05:23:34.80992515 +0000 UTC m=+950.179835407" observedRunningTime="2026-01-30 05:23:39.431031774 +0000 UTC m=+954.800942031" watchObservedRunningTime="2026-01-30 05:23:39.439289661 +0000 UTC m=+954.809199918"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.461884 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7" podStartSLOduration=2.917636312 podStartE2EDuration="21.461866732s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.840550264 +0000 UTC m=+935.210460521" lastFinishedPulling="2026-01-30 05:23:38.384780684 +0000 UTC m=+953.754690941" observedRunningTime="2026-01-30 05:23:39.45999568 +0000 UTC m=+954.829905937" watchObservedRunningTime="2026-01-30 05:23:39.461866732 +0000 UTC m=+954.831776989"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.496294 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2" podStartSLOduration=5.603526871 podStartE2EDuration="21.496276828s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.378613241 +0000 UTC m=+934.748523498" lastFinishedPulling="2026-01-30 05:23:35.271363198 +0000 UTC m=+950.641273455" observedRunningTime="2026-01-30 05:23:39.491098206 +0000 UTC m=+954.861008463" watchObservedRunningTime="2026-01-30 05:23:39.496276828 +0000 UTC m=+954.866187075"
Jan 30 05:23:39 crc kubenswrapper[4931]: I0130 05:23:39.529896 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s" podStartSLOduration=2.998925867 podStartE2EDuration="21.529881272s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.811196997 +0000 UTC m=+935.181107254" lastFinishedPulling="2026-01-30 05:23:38.342152402 +0000 UTC m=+953.712062659" observedRunningTime="2026-01-30 05:23:39.528116523 +0000 UTC m=+954.898026780" watchObservedRunningTime="2026-01-30 05:23:39.529881272 +0000 UTC m=+954.899791529"
Jan 30 05:23:42 crc kubenswrapper[4931]: I0130 05:23:42.946543 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" event={"ID":"ad890bc5-5b72-4833-86d5-2c022cd87e4a","Type":"ContainerStarted","Data":"cac9d9cf02537d9f30af2fd8be71b7f5f9ec7eb7b38a9f7bfef1be1aab977885"}
Jan 30 05:23:42 crc kubenswrapper[4931]: I0130 05:23:42.971202 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v4vnz" podStartSLOduration=2.997967851 podStartE2EDuration="24.971176723s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.833071488 +0000 UTC m=+935.202981745" lastFinishedPulling="2026-01-30 05:23:41.80628036 +0000 UTC m=+957.176190617" observedRunningTime="2026-01-30 05:23:42.966579456 +0000 UTC m=+958.336489773" watchObservedRunningTime="2026-01-30 05:23:42.971176723 +0000 UTC m=+958.341087010"
Jan 30 05:23:47 crc kubenswrapper[4931]: I0130 05:23:47.998772 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" event={"ID":"dea1ae69-0c15-4228-a323-dc6f762e3c82","Type":"ContainerStarted","Data":"74e87ffff406463ee0865f7b17c2812c2fd93121443f274041a32e947224d69f"}
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:47.999796 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.028697 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z" podStartSLOduration=2.158455816 podStartE2EDuration="30.028673317s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.077561763 +0000 UTC m=+934.447472020" lastFinishedPulling="2026-01-30 05:23:46.947779224 +0000 UTC m=+962.317689521" observedRunningTime="2026-01-30 05:23:48.024307247 +0000 UTC m=+963.394217554" watchObservedRunningTime="2026-01-30 05:23:48.028673317 +0000 UTC m=+963.398583584"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.358302 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mttxk"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.384722 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-4wv6z"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.405115 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-nsn26"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.432164 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lmgq2"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.457304 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-l5dv2"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.544669 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-ddtbw"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.601244 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-fsdvn"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.665878 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-wssqz"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.735003 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5l9jv"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.747241 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-kndp7"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.785005 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mkk7j"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.802353 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-t4scx"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.818640 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-gqvgs"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.835315 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-gqv2m"
Jan 30 05:23:48 crc kubenswrapper[4931]: I0130 05:23:48.896242 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-vccxr"
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.003218 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-vqp2s"
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.007501 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" event={"ID":"a3f6ed4d-518f-4415-9378-73fca072d431","Type":"ContainerStarted","Data":"c5b2cb4f2ceb89c9e32a2fc6e8ad2b3582da479d12e85146f2f6f17a818cad3d"}
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.008076 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg"
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.013818 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" event={"ID":"33b18ace-2da3-4bad-b093-d7db2aad7f50","Type":"ContainerStarted","Data":"23e951c62f4ac3258f76d8365dba7c4e0d144bbf7537acc8fff1bbfe4d7f5ec0"}
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.014063 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj"
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.045607 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg" podStartSLOduration=1.987107304 podStartE2EDuration="31.045587291s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.80876808 +0000 UTC m=+935.178678337" lastFinishedPulling="2026-01-30 05:23:48.867248067 +0000 UTC m=+964.237158324" observedRunningTime="2026-01-30 05:23:49.0350049 +0000 UTC m=+964.404915177" watchObservedRunningTime="2026-01-30 05:23:49.045587291 +0000 UTC m=+964.415497548"
Jan 30 05:23:49 crc kubenswrapper[4931]: I0130 05:23:49.059987 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj" podStartSLOduration=2.4049755250000002 podStartE2EDuration="31.059973277s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:19.369310975 +0000 UTC m=+934.739221242" lastFinishedPulling="2026-01-30 05:23:48.024308727 +0000 UTC m=+963.394218994" observedRunningTime="2026-01-30 05:23:49.059589996 +0000 UTC m=+964.429500253" watchObservedRunningTime="2026-01-30 05:23:49.059973277 +0000 UTC m=+964.429883534"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.121220 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.130826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29ae7a52-ff32-4f97-8f6c-830ac4e4b40b-cert\") pod \"infra-operator-controller-manager-79955696d6-tzxqv\" (UID: \"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.296238 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5fpjl"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.305168 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.325549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.330604 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47b128c8-46ef-422c-aabc-1220f85fef83-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp\" (UID: \"47b128c8-46ef-422c-aabc-1220f85fef83\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.552010 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"]
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.571059 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zl4cx"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.580348 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.736578 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.736718 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.746380 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.747013 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5852e12a-376e-420f-a0fd-efecae7ef623-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-qpp9f\" (UID: \"5852e12a-376e-420f-a0fd-efecae7ef623\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.839709 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"]
Jan 30 05:23:50 crc kubenswrapper[4931]: W0130 05:23:50.844933 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b128c8_46ef_422c_aabc_1220f85fef83.slice/crio-ecab003ed86f5685bfcb4ee8bfa058812cc32ad956a26e8a03faa87a6e7f1234 WatchSource:0}: Error finding container ecab003ed86f5685bfcb4ee8bfa058812cc32ad956a26e8a03faa87a6e7f1234: Status 404 returned error can't find the container with id ecab003ed86f5685bfcb4ee8bfa058812cc32ad956a26e8a03faa87a6e7f1234
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.895015 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sxl8b"
Jan 30 05:23:50 crc kubenswrapper[4931]: I0130 05:23:50.903690 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:23:51 crc kubenswrapper[4931]: I0130 05:23:51.040365 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" event={"ID":"47b128c8-46ef-422c-aabc-1220f85fef83","Type":"ContainerStarted","Data":"ecab003ed86f5685bfcb4ee8bfa058812cc32ad956a26e8a03faa87a6e7f1234"}
Jan 30 05:23:51 crc kubenswrapper[4931]: I0130 05:23:51.042146 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" event={"ID":"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b","Type":"ContainerStarted","Data":"6585170c17fcbafd900d4df4f324a65b80219870b607472bc1bb47bb88b4be49"}
Jan 30 05:23:51 crc kubenswrapper[4931]: I0130 05:23:51.196121 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"]
Jan 30 05:23:52 crc kubenswrapper[4931]: I0130 05:23:52.056170 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" event={"ID":"5852e12a-376e-420f-a0fd-efecae7ef623","Type":"ContainerStarted","Data":"445fe4c7045195b6564422c7068094c3df8b23bc420290d25f20db125cadefdb"}
Jan 30 05:23:52 crc kubenswrapper[4931]: I0130 05:23:52.056216 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" event={"ID":"5852e12a-376e-420f-a0fd-efecae7ef623","Type":"ContainerStarted","Data":"806cf08e848406395c1aced7aa0c86ae87439f12aaa856a998612f458f4062a2"}
Jan 30 05:23:52 crc kubenswrapper[4931]: I0130 05:23:52.057222 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:23:52 crc kubenswrapper[4931]: I0130 05:23:52.086731 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f" podStartSLOduration=34.086717017 podStartE2EDuration="34.086717017s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:23:52.08536719 +0000 UTC m=+967.455277447" watchObservedRunningTime="2026-01-30 05:23:52.086717017 +0000 UTC m=+967.456627274"
Jan 30 05:23:54 crc kubenswrapper[4931]: I0130 05:23:54.070892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" event={"ID":"29ae7a52-ff32-4f97-8f6c-830ac4e4b40b","Type":"ContainerStarted","Data":"84679140b3518d88037d90080960b88b7b8345bd0388d026da15d2a2c3c2dd76"}
Jan 30 05:23:54 crc kubenswrapper[4931]: I0130 05:23:54.071314 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"
Jan 30 05:23:54 crc kubenswrapper[4931]: I0130 05:23:54.074663 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" event={"ID":"47b128c8-46ef-422c-aabc-1220f85fef83","Type":"ContainerStarted","Data":"9c0d20f49a082af89977722012c4671034b35f92aea87540386161f822f2c2b6"}
Jan 30 05:23:54 crc kubenswrapper[4931]: I0130 05:23:54.101151 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv" podStartSLOduration=33.293509585 podStartE2EDuration="36.101131541s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:50.55439225 +0000 UTC m=+965.924302507" lastFinishedPulling="2026-01-30 05:23:53.362014196 +0000 UTC m=+968.731924463" observedRunningTime="2026-01-30 05:23:54.092500103 +0000 UTC m=+969.462410360" watchObservedRunningTime="2026-01-30 05:23:54.101131541 +0000 UTC m=+969.471041798"
Jan 30 05:23:54 crc kubenswrapper[4931]: I0130 05:23:54.131754 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp" podStartSLOduration=33.601185165 podStartE2EDuration="36.131728532s" podCreationTimestamp="2026-01-30 05:23:18 +0000 UTC" firstStartedPulling="2026-01-30 05:23:50.846922114 +0000 UTC m=+966.216832371" lastFinishedPulling="2026-01-30 05:23:53.377465481 +0000 UTC m=+968.747375738" observedRunningTime="2026-01-30 05:23:54.130174409 +0000 UTC m=+969.500084676" watchObservedRunningTime="2026-01-30 05:23:54.131728532 +0000 UTC m=+969.501638799"
Jan 30 05:23:55 crc kubenswrapper[4931]: I0130 05:23:55.081974 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"
Jan 30 05:23:58 crc kubenswrapper[4931]: I0130 05:23:58.382746 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-bf56z"
Jan 30 05:23:58 crc kubenswrapper[4931]: I0130 05:23:58.529404 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-v9fgj"
Jan 30 05:23:58 crc kubenswrapper[4931]: I0130 05:23:58.610663 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-5sgtg"
Jan 30 05:24:00 crc kubenswrapper[4931]: I0130 05:24:00.316563 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-tzxqv"
Jan 30 05:24:00 crc kubenswrapper[4931]: I0130 05:24:00.590560 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp"
Jan 30 05:24:00 crc kubenswrapper[4931]: I0130 05:24:00.913668 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-qpp9f"
Jan 30 05:24:15 crc kubenswrapper[4931]: I0130 05:24:15.986839 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-lfczj"]
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.006444 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.009599 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-lfczj"]
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.010903 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.011757 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.012020 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.012324 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rmhq8"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.031300 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-249jr"]
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.032336 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.036953 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.038116 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f097854-006c-4110-bec8-b9d364ddb000-config\") pod \"dnsmasq-dns-84bb9d8bd9-lfczj\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.038172 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99sh\" (UniqueName: \"kubernetes.io/projected/7f097854-006c-4110-bec8-b9d364ddb000-kube-api-access-q99sh\") pod \"dnsmasq-dns-84bb9d8bd9-lfczj\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.051936 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-249jr"]
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.139839 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99sh\" (UniqueName: \"kubernetes.io/projected/7f097854-006c-4110-bec8-b9d364ddb000-kube-api-access-q99sh\") pod \"dnsmasq-dns-84bb9d8bd9-lfczj\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.139907 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-config\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.139932 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7lm\" (UniqueName: \"kubernetes.io/projected/942de512-1fdc-4955-a703-ccd872474993-kube-api-access-kx7lm\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.140032 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-dns-svc\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.140060 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f097854-006c-4110-bec8-b9d364ddb000-config\") pod \"dnsmasq-dns-84bb9d8bd9-lfczj\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.141214 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f097854-006c-4110-bec8-b9d364ddb000-config\") pod \"dnsmasq-dns-84bb9d8bd9-lfczj\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.158747 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99sh\" (UniqueName: \"kubernetes.io/projected/7f097854-006c-4110-bec8-b9d364ddb000-kube-api-access-q99sh\") pod \"dnsmasq-dns-84bb9d8bd9-lfczj\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.241596 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-config\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.241652 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7lm\" (UniqueName: \"kubernetes.io/projected/942de512-1fdc-4955-a703-ccd872474993-kube-api-access-kx7lm\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.241753 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-dns-svc\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.242873 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-dns-svc\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.242955 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-config\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.269155 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7lm\" (UniqueName: \"kubernetes.io/projected/942de512-1fdc-4955-a703-ccd872474993-kube-api-access-kx7lm\") pod \"dnsmasq-dns-5f854695bc-249jr\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.338109 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.372976 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-249jr"
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.624331 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-lfczj"]
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.639929 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 05:24:16 crc kubenswrapper[4931]: I0130 05:24:16.695285 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-249jr"]
Jan 30 05:24:16 crc kubenswrapper[4931]: W0130 05:24:16.696100 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod942de512_1fdc_4955_a703_ccd872474993.slice/crio-e64df3f8ede918e6ed1e054ec67159e4720f4b2da72fde95107c95f43f1ee3d0 WatchSource:0}: Error finding container e64df3f8ede918e6ed1e054ec67159e4720f4b2da72fde95107c95f43f1ee3d0: Status 404 returned error can't find the container with id e64df3f8ede918e6ed1e054ec67159e4720f4b2da72fde95107c95f43f1ee3d0
Jan 30 05:24:17 crc kubenswrapper[4931]: I0130 05:24:17.270651 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj" event={"ID":"7f097854-006c-4110-bec8-b9d364ddb000","Type":"ContainerStarted","Data":"bbc8e313c79f2552b4495d71bd60cf2000cede24398da8824f1260de69102011"}
Jan 30 05:24:17 crc kubenswrapper[4931]: I0130 05:24:17.272168 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-249jr" event={"ID":"942de512-1fdc-4955-a703-ccd872474993","Type":"ContainerStarted","Data":"e64df3f8ede918e6ed1e054ec67159e4720f4b2da72fde95107c95f43f1ee3d0"}
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.564113 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-lfczj"]
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.585118 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tbgfx"]
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.586858 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.601915 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tbgfx"]
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.684906 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-config\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.684972 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkvn6\" (UniqueName: \"kubernetes.io/projected/75ff6901-d6c9-467a-a4d2-35ddb8050570-kube-api-access-wkvn6\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.685024 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.786225 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-config\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.786273 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkvn6\" (UniqueName: \"kubernetes.io/projected/75ff6901-d6c9-467a-a4d2-35ddb8050570-kube-api-access-wkvn6\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.786308 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.787203 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.787592 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-config\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.815496 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkvn6\" (UniqueName: \"kubernetes.io/projected/75ff6901-d6c9-467a-a4d2-35ddb8050570-kube-api-access-wkvn6\") pod \"dnsmasq-dns-744ffd65bc-tbgfx\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.872398 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-249jr"]
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.899917 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9vtmt"]
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.901514 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.909876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx"
Jan 30 05:24:18 crc kubenswrapper[4931]: I0130 05:24:18.911365 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9vtmt"]
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.090331 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-dns-svc\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.090384 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-config\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.090460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxhw\" (UniqueName: \"kubernetes.io/projected/7358be48-7c82-45bd-8165-0c02dcdb3666-kube-api-access-knxhw\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.195218 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxhw\" (UniqueName: \"kubernetes.io/projected/7358be48-7c82-45bd-8165-0c02dcdb3666-kube-api-access-knxhw\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.195302 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-dns-svc\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.195331 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-config\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.196136 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-config\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.198237 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-dns-svc\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.218198 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxhw\" (UniqueName: \"kubernetes.io/projected/7358be48-7c82-45bd-8165-0c02dcdb3666-kube-api-access-knxhw\") pod \"dnsmasq-dns-95f5f6995-9vtmt\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.225284 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.444761 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tbgfx"]
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.648960 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9vtmt"]
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.702410 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.703487 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.706438 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.706506 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.706643 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.706704 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.706804 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.707004 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.707109 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bn5cs"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.719533 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904255 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904298 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904339 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc3f4796-66b1-452b-afca-5e62cbf2a53b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904365 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6xv5\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-kube-api-access-d6xv5\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904381 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904406 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904554 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904773 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
Jan 30 05:24:19 crc kubenswrapper[4931]: I0130 05:24:19.904832 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc3f4796-66b1-452b-afca-5e62cbf2a53b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0"
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.006457 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.006958 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.006983 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.007152 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.007300 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.008879 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc3f4796-66b1-452b-afca-5e62cbf2a53b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.008945 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.008966 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6xv5\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-kube-api-access-d6xv5\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.009033 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.009111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.009130 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.009142 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.009339 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.009491 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc3f4796-66b1-452b-afca-5e62cbf2a53b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.010077 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.010496 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.017280 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.017922 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc3f4796-66b1-452b-afca-5e62cbf2a53b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.020949 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.020949 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.021630 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.023736 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.024335 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.024485 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.025204 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.025502 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-smjgk" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.025595 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.025704 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.025843 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.025904 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.030133 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.030560 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.030877 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc3f4796-66b1-452b-afca-5e62cbf2a53b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.049521 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6xv5\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-kube-api-access-d6xv5\") pod \"rabbitmq-server-0\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " 
pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.214965 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.215016 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.215075 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.216343 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/081e3873-ea99-4486-925f-784a98e49405-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217444 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217537 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217595 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217637 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrm76\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-kube-api-access-rrm76\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217682 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/081e3873-ea99-4486-925f-784a98e49405-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217707 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.217737 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.321850 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrm76\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-kube-api-access-rrm76\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.321908 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/081e3873-ea99-4486-925f-784a98e49405-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.321927 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.321950 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.321973 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.321990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.322030 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 
05:24:20.322058 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/081e3873-ea99-4486-925f-784a98e49405-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.322072 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.322092 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.322259 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.322613 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.322911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.323040 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.323235 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.323907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.324286 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.324304 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.329856 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/081e3873-ea99-4486-925f-784a98e49405-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.331181 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.342212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrm76\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-kube-api-access-rrm76\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.343398 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/081e3873-ea99-4486-925f-784a98e49405-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.345125 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.352390 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:20 crc kubenswrapper[4931]: I0130 05:24:20.429535 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.360777 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
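
Note the storage pattern in the RabbitMQ entries above and the Galera entries below: besides the configmap, secret, projected, empty-dir and downward-api volumes, each server pod binds a pre-provisioned local PV, and the one-off "MountVolume.MountDevice succeeded ... device mount path" entry records the node directory backing it (local-storage01-crc maps to /mnt/openstack/pv01 for rabbitmq-server-0, local-storage06-crc to /mnt/openstack/pv06 for rabbitmq-cell1-server-0, and pv05/pv02 back the two Galera pods). A short sketch for recovering that PV-to-path-to-pod map from the journal, under the same illustrative assumptions as the previous script:

    import re
    import sys

    # Matches e.g.: MountVolume.MountDevice succeeded for volume \"local-storage01-crc\"
    #               ... device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
    DEV = re.compile(
        r'MountDevice succeeded for volume \\?"(?P<pv>local-storage\d+-crc)\\?"'
        r'.*?device mount path \\?"(?P<path>[^"\\]+)\\?".*?pod="(?P<pod>[^"]+)"')

    for line in sys.stdin:
        for m in DEV.finditer(line):
            print(f'{m["pv"]} -> {m["path"]} ({m["pod"]})')
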
Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.365876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.375010 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.375961 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.376408 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.376577 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ms6mr" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.380907 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.382051 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551341 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-default\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551394 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5fq4\" (UniqueName: \"kubernetes.io/projected/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kube-api-access-t5fq4\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551436 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-operator-scripts\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551626 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551724 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-generated\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551870 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kolla-config\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.551927 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.552085 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.653654 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-default\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.653705 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5fq4\" (UniqueName: \"kubernetes.io/projected/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kube-api-access-t5fq4\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.653732 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-operator-scripts\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.653763 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.653969 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-generated\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.654007 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kolla-config\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.654031 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.654118 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: 
\"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.655096 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-default\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.655129 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-generated\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.655239 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kolla-config\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.655432 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.656193 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-operator-scripts\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.659205 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.664117 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.673276 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5fq4\" (UniqueName: \"kubernetes.io/projected/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kube-api-access-t5fq4\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.680072 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " pod="openstack/openstack-galera-0" Jan 30 05:24:21 crc kubenswrapper[4931]: I0130 05:24:21.713909 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.809284 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.815618 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.822793 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.823139 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.823381 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wvx2b" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.824389 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.852765 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974417 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974487 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974522 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974549 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbw9z\" (UniqueName: \"kubernetes.io/projected/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kube-api-access-fbw9z\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974581 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974676 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974705 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:22 crc kubenswrapper[4931]: I0130 05:24:22.974727 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078544 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078602 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078624 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078666 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078688 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078708 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078729 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbw9z\" (UniqueName: \"kubernetes.io/projected/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kube-api-access-fbw9z\") pod \"openstack-cell1-galera-0\" (UID: 
\"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.078752 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.079471 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.079698 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.080060 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.080805 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.080921 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.084764 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.085974 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.095727 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.097036 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.098823 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.098875 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4v8gl" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.099856 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.106728 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbw9z\" (UniqueName: \"kubernetes.io/projected/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kube-api-access-fbw9z\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.125923 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.145408 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.156738 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.179946 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kolla-config\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.179999 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-config-data\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.180149 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l65cq\" (UniqueName: \"kubernetes.io/projected/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kube-api-access-l65cq\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.180246 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.180286 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: 
I0130 05:24:23.282148 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l65cq\" (UniqueName: \"kubernetes.io/projected/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kube-api-access-l65cq\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.282204 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.282231 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.282282 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kolla-config\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.282311 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-config-data\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.283200 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kolla-config\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.283208 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-config-data\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.286886 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.288826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 05:24:23.297207 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l65cq\" (UniqueName: \"kubernetes.io/projected/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kube-api-access-l65cq\") pod \"memcached-0\" (UID: \"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba\") " pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: I0130 
05:24:23.478159 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 05:24:23 crc kubenswrapper[4931]: W0130 05:24:23.612802 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75ff6901_d6c9_467a_a4d2_35ddb8050570.slice/crio-d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206 WatchSource:0}: Error finding container d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206: Status 404 returned error can't find the container with id d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206 Jan 30 05:24:23 crc kubenswrapper[4931]: W0130 05:24:23.613307 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7358be48_7c82_45bd_8165_0c02dcdb3666.slice/crio-16572f431ff9e8b9359ca1361817097bd7aaff980c134f42bc30a47f2e623bfc WatchSource:0}: Error finding container 16572f431ff9e8b9359ca1361817097bd7aaff980c134f42bc30a47f2e623bfc: Status 404 returned error can't find the container with id 16572f431ff9e8b9359ca1361817097bd7aaff980c134f42bc30a47f2e623bfc Jan 30 05:24:24 crc kubenswrapper[4931]: I0130 05:24:24.328738 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx" event={"ID":"75ff6901-d6c9-467a-a4d2-35ddb8050570","Type":"ContainerStarted","Data":"d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206"} Jan 30 05:24:24 crc kubenswrapper[4931]: I0130 05:24:24.329911 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" event={"ID":"7358be48-7c82-45bd-8165-0c02dcdb3666","Type":"ContainerStarted","Data":"16572f431ff9e8b9359ca1361817097bd7aaff980c134f42bc30a47f2e623bfc"} Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.011008 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
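
Two things stand out in the entries above. First, the W0130 manager.go:1169 warnings are cAdvisor noticing the new crio-<id> cgroups before CRI-O can answer for those containers, hence the "Status 404". Second, the same two container IDs (d42db6c1... and 16572f43...) reappear about a second later in the PLEG "ContainerStarted" events, which marks the warnings as a startup race rather than lost containers. A sketch for cross-checking that every 404-warned ID eventually reports ContainerStarted, under the same illustrative stdin assumption:

    import re
    import sys

    WATCH = re.compile(r'Failed to process watch event .*?crio-(?P<cid>[0-9a-f]{64})')
    PLEG = re.compile(r'"ContainerStarted","Data":"(?P<cid>[0-9a-f]{64})"')

    warned, started = set(), set()
    for line in sys.stdin:
        warned.update(m["cid"] for m in WATCH.finditer(line))
        started.update(m["cid"] for m in PLEG.finditer(line))

    for cid in sorted(warned - started):
        print("watch-event 404 with no matching ContainerStarted:", cid)

For this section the set difference is empty: both warned container IDs started.
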
Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.012493 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.015498 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-b6t4t" Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.017510 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.112960 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqss9\" (UniqueName: \"kubernetes.io/projected/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2-kube-api-access-tqss9\") pod \"kube-state-metrics-0\" (UID: \"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2\") " pod="openstack/kube-state-metrics-0" Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.215306 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqss9\" (UniqueName: \"kubernetes.io/projected/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2-kube-api-access-tqss9\") pod \"kube-state-metrics-0\" (UID: \"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2\") " pod="openstack/kube-state-metrics-0" Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.238261 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqss9\" (UniqueName: \"kubernetes.io/projected/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2-kube-api-access-tqss9\") pod \"kube-state-metrics-0\" (UID: \"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2\") " pod="openstack/kube-state-metrics-0" Jan 30 05:24:25 crc kubenswrapper[4931]: I0130 05:24:25.370387 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.975163 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ggjtl"] Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.976788 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.979694 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zlq6r" Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.980219 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.987656 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-thxc2"] Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.987850 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.990067 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:28 crc kubenswrapper[4931]: I0130 05:24:28.993876 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ggjtl"] Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077171 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-ovn-controller-tls-certs\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077219 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-log-ovn\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077243 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077261 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-log\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077280 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-lib\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run-ovn\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077311 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7snh\" (UniqueName: \"kubernetes.io/projected/8a337463-8b7e-496b-9a01-fc491120c21d-kube-api-access-l7snh\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077329 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-etc-ovs\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077373 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d259g\" (UniqueName: 
\"kubernetes.io/projected/5732e34e-6330-4a36-9082-dbb50eede9f2-kube-api-access-d259g\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077396 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-run\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5732e34e-6330-4a36-9082-dbb50eede9f2-scripts\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077472 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a337463-8b7e-496b-9a01-fc491120c21d-scripts\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.077500 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-combined-ca-bundle\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.080382 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-thxc2"] Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178719 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d259g\" (UniqueName: \"kubernetes.io/projected/5732e34e-6330-4a36-9082-dbb50eede9f2-kube-api-access-d259g\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178764 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-run\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178794 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5732e34e-6330-4a36-9082-dbb50eede9f2-scripts\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178816 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a337463-8b7e-496b-9a01-fc491120c21d-scripts\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178841 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-combined-ca-bundle\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-ovn-controller-tls-certs\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178885 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-log-ovn\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178905 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178926 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-log\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178946 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-lib\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178972 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run-ovn\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.178988 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7snh\" (UniqueName: \"kubernetes.io/projected/8a337463-8b7e-496b-9a01-fc491120c21d-kube-api-access-l7snh\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.179009 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-etc-ovs\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.179557 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-etc-ovs\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " 
pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.179972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run-ovn\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.180096 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.180121 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-run\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.182059 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a337463-8b7e-496b-9a01-fc491120c21d-scripts\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.182166 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-log\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.182210 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-log-ovn\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.182366 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-lib\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.183690 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5732e34e-6330-4a36-9082-dbb50eede9f2-scripts\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.185186 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-ovn-controller-tls-certs\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.187514 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-combined-ca-bundle\") pod 
\"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.192663 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d259g\" (UniqueName: \"kubernetes.io/projected/5732e34e-6330-4a36-9082-dbb50eede9f2-kube-api-access-d259g\") pod \"ovn-controller-ovs-thxc2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.197191 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7snh\" (UniqueName: \"kubernetes.io/projected/8a337463-8b7e-496b-9a01-fc491120c21d-kube-api-access-l7snh\") pod \"ovn-controller-ggjtl\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.335386 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.339123 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.482299 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.484411 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.486361 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.486885 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.487186 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.487917 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.489179 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-s2vdq" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.514872 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586431 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2l8\" (UniqueName: \"kubernetes.io/projected/49a63fb4-24bc-4834-b6e7-937688c5de09-kube-api-access-6f2l8\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586478 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586508 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586534 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586621 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586638 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-config\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.586655 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.688643 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.688763 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.688796 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-config\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.688839 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc 
kubenswrapper[4931]: I0130 05:24:29.688877 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2l8\" (UniqueName: \"kubernetes.io/projected/49a63fb4-24bc-4834-b6e7-937688c5de09-kube-api-access-6f2l8\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.688925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.688981 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.689029 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.689996 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-config\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.690273 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.691058 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.692578 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.699548 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.701977 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.711086 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.714173 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2l8\" (UniqueName: \"kubernetes.io/projected/49a63fb4-24bc-4834-b6e7-937688c5de09-kube-api-access-6f2l8\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.720165 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:29 crc kubenswrapper[4931]: I0130 05:24:29.808876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:31 crc kubenswrapper[4931]: I0130 05:24:31.383103 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: E0130 05:24:32.108703 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 30 05:24:32 crc kubenswrapper[4931]: E0130 05:24:32.108873 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kx7lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-249jr_openstack(942de512-1fdc-4955-a703-ccd872474993): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:24:32 crc kubenswrapper[4931]: E0130 05:24:32.110204 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-249jr" podUID="942de512-1fdc-4955-a703-ccd872474993" Jan 30 05:24:32 crc kubenswrapper[4931]: W0130 05:24:32.120173 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod348ffd7a_9b7f_40aa_ada9_145a3a783d09.slice/crio-b84c7628d09612f5f198418a62d4a2daabe598f826560ac2746867af05368a8f WatchSource:0}: Error finding container b84c7628d09612f5f198418a62d4a2daabe598f826560ac2746867af05368a8f: Status 404 returned error can't find the container with id b84c7628d09612f5f198418a62d4a2daabe598f826560ac2746867af05368a8f Jan 30 05:24:32 crc kubenswrapper[4931]: E0130 05:24:32.142110 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 30 05:24:32 crc kubenswrapper[4931]: E0130 05:24:32.142496 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug 
--bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q99sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-lfczj_openstack(7f097854-006c-4110-bec8-b9d364ddb000): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:24:32 crc kubenswrapper[4931]: E0130 05:24:32.143851 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj" podUID="7f097854-006c-4110-bec8-b9d364ddb000" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.389856 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"348ffd7a-9b7f-40aa-ada9-145a3a783d09","Type":"ContainerStarted","Data":"b84c7628d09612f5f198418a62d4a2daabe598f826560ac2746867af05368a8f"} Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.745678 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-249jr" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.751662 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.785773 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.786916 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.790396 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.790446 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.790749 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-c7nvq" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.790880 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.793861 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.845644 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-dns-svc\") pod \"942de512-1fdc-4955-a703-ccd872474993\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.845812 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7lm\" (UniqueName: \"kubernetes.io/projected/942de512-1fdc-4955-a703-ccd872474993-kube-api-access-kx7lm\") pod \"942de512-1fdc-4955-a703-ccd872474993\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.845880 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-config\") pod \"942de512-1fdc-4955-a703-ccd872474993\" (UID: \"942de512-1fdc-4955-a703-ccd872474993\") " Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.845933 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99sh\" (UniqueName: \"kubernetes.io/projected/7f097854-006c-4110-bec8-b9d364ddb000-kube-api-access-q99sh\") pod \"7f097854-006c-4110-bec8-b9d364ddb000\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.845980 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f097854-006c-4110-bec8-b9d364ddb000-config\") pod \"7f097854-006c-4110-bec8-b9d364ddb000\" (UID: \"7f097854-006c-4110-bec8-b9d364ddb000\") " Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846513 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846572 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mdl\" (UniqueName: \"kubernetes.io/projected/f28f211b-be26-4f15-92a1-36b91cb53bbb-kube-api-access-k9mdl\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846620 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846687 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846744 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846821 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.846856 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-config\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.848160 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "942de512-1fdc-4955-a703-ccd872474993" (UID: "942de512-1fdc-4955-a703-ccd872474993"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.850721 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-config" (OuterVolumeSpecName: "config") pod "942de512-1fdc-4955-a703-ccd872474993" (UID: "942de512-1fdc-4955-a703-ccd872474993"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.851289 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f097854-006c-4110-bec8-b9d364ddb000-config" (OuterVolumeSpecName: "config") pod "7f097854-006c-4110-bec8-b9d364ddb000" (UID: "7f097854-006c-4110-bec8-b9d364ddb000"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.854881 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f097854-006c-4110-bec8-b9d364ddb000-kube-api-access-q99sh" (OuterVolumeSpecName: "kube-api-access-q99sh") pod "7f097854-006c-4110-bec8-b9d364ddb000" (UID: "7f097854-006c-4110-bec8-b9d364ddb000"). InnerVolumeSpecName "kube-api-access-q99sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.855592 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942de512-1fdc-4955-a703-ccd872474993-kube-api-access-kx7lm" (OuterVolumeSpecName: "kube-api-access-kx7lm") pod "942de512-1fdc-4955-a703-ccd872474993" (UID: "942de512-1fdc-4955-a703-ccd872474993"). InnerVolumeSpecName "kube-api-access-kx7lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.925787 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.932230 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.938173 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ggjtl"] Jan 30 05:24:32 crc kubenswrapper[4931]: W0130 05:24:32.941705 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e7b62f_8246_48b8_bcbb_d7c5129dd5e2.slice/crio-54dbd562a66dec0d4b1d17dc98e849f8bad3b54d165bc61c92a13695e75f4ae7 WatchSource:0}: Error finding container 54dbd562a66dec0d4b1d17dc98e849f8bad3b54d165bc61c92a13695e75f4ae7: Status 404 returned error can't find the container with id 54dbd562a66dec0d4b1d17dc98e849f8bad3b54d165bc61c92a13695e75f4ae7 Jan 30 05:24:32 crc kubenswrapper[4931]: W0130 05:24:32.941906 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a337463_8b7e_496b_9a01_fc491120c21d.slice/crio-c2331a0e3efc476073fa6f72907e46cdc0fd3358dd0c363648234586881ae09d WatchSource:0}: Error finding container c2331a0e3efc476073fa6f72907e46cdc0fd3358dd0c363648234586881ae09d: Status 404 returned error can't find the container with id c2331a0e3efc476073fa6f72907e46cdc0fd3358dd0c363648234586881ae09d Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948622 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948661 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9mdl\" (UniqueName: \"kubernetes.io/projected/f28f211b-be26-4f15-92a1-36b91cb53bbb-kube-api-access-k9mdl\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948688 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948728 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948767 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948791 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948808 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948827 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-config\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948887 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7lm\" (UniqueName: \"kubernetes.io/projected/942de512-1fdc-4955-a703-ccd872474993-kube-api-access-kx7lm\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948898 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948908 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99sh\" (UniqueName: \"kubernetes.io/projected/7f097854-006c-4110-bec8-b9d364ddb000-kube-api-access-q99sh\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948917 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f097854-006c-4110-bec8-b9d364ddb000-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.948924 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/942de512-1fdc-4955-a703-ccd872474993-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.949877 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-config\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " 
pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.950139 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.950792 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.953670 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.954629 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.954645 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.955140 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.972936 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9mdl\" (UniqueName: \"kubernetes.io/projected/f28f211b-be26-4f15-92a1-36b91cb53bbb-kube-api-access-k9mdl\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.975488 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.983509 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.987445 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 05:24:32 crc kubenswrapper[4931]: I0130 05:24:32.992751 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:24:33 crc kubenswrapper[4931]: W0130 05:24:33.012641 4931 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081e3873_ea99_4486_925f_784a98e49405.slice/crio-9ce481797a1f7988304010979cd564d60b819812a50932395d5f66e51b07187f WatchSource:0}: Error finding container 9ce481797a1f7988304010979cd564d60b819812a50932395d5f66e51b07187f: Status 404 returned error can't find the container with id 9ce481797a1f7988304010979cd564d60b819812a50932395d5f66e51b07187f Jan 30 05:24:33 crc kubenswrapper[4931]: W0130 05:24:33.014876 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc3f4796_66b1_452b_afca_5e62cbf2a53b.slice/crio-29ea210245e2f099c89b0f1dd11f3b873ad2e58d0c37d026cc8cfb61ec6d3cfa WatchSource:0}: Error finding container 29ea210245e2f099c89b0f1dd11f3b873ad2e58d0c37d026cc8cfb61ec6d3cfa: Status 404 returned error can't find the container with id 29ea210245e2f099c89b0f1dd11f3b873ad2e58d0c37d026cc8cfb61ec6d3cfa Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.059825 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-thxc2"] Jan 30 05:24:33 crc kubenswrapper[4931]: W0130 05:24:33.065587 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5732e34e_6330_4a36_9082_dbb50eede9f2.slice/crio-f259a70451b1edb6023ad4c42bb1037e4e2cbc756eed3599105a7d0ba07dc5ac WatchSource:0}: Error finding container f259a70451b1edb6023ad4c42bb1037e4e2cbc756eed3599105a7d0ba07dc5ac: Status 404 returned error can't find the container with id f259a70451b1edb6023ad4c42bb1037e4e2cbc756eed3599105a7d0ba07dc5ac Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.117802 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.396842 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerStarted","Data":"f259a70451b1edb6023ad4c42bb1037e4e2cbc756eed3599105a7d0ba07dc5ac"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.398839 4931 generic.go:334] "Generic (PLEG): container finished" podID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerID="d84e00fba877076358074492301ade07a08806d5e1a8e09523a5b7de67a88279" exitCode=0 Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.398883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" event={"ID":"7358be48-7c82-45bd-8165-0c02dcdb3666","Type":"ContainerDied","Data":"d84e00fba877076358074492301ade07a08806d5e1a8e09523a5b7de67a88279"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.400525 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2","Type":"ContainerStarted","Data":"54dbd562a66dec0d4b1d17dc98e849f8bad3b54d165bc61c92a13695e75f4ae7"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.404469 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef","Type":"ContainerStarted","Data":"651858dcd740868b54f1818387952f7e3dd92b06537502abf826f277b0f1c2f7"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.407064 4931 generic.go:334] "Generic (PLEG): container finished" podID="75ff6901-d6c9-467a-a4d2-35ddb8050570" 
containerID="cf669d89126cd05876fe2026bdc44224135e63c9e8ec5899f87342a850974a32" exitCode=0 Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.407116 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx" event={"ID":"75ff6901-d6c9-467a-a4d2-35ddb8050570","Type":"ContainerDied","Data":"cf669d89126cd05876fe2026bdc44224135e63c9e8ec5899f87342a850974a32"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.408934 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-249jr" Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.408981 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-249jr" event={"ID":"942de512-1fdc-4955-a703-ccd872474993","Type":"ContainerDied","Data":"e64df3f8ede918e6ed1e054ec67159e4720f4b2da72fde95107c95f43f1ee3d0"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.413745 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba","Type":"ContainerStarted","Data":"04861bcc57b9390c9ad1874bbf632a1a5e0da259d664ad8c22e1c2db45c343a6"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.450947 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj" Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.457848 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"081e3873-ea99-4486-925f-784a98e49405","Type":"ContainerStarted","Data":"9ce481797a1f7988304010979cd564d60b819812a50932395d5f66e51b07187f"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.457908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fc3f4796-66b1-452b-afca-5e62cbf2a53b","Type":"ContainerStarted","Data":"29ea210245e2f099c89b0f1dd11f3b873ad2e58d0c37d026cc8cfb61ec6d3cfa"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.457928 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-lfczj" event={"ID":"7f097854-006c-4110-bec8-b9d364ddb000","Type":"ContainerDied","Data":"bbc8e313c79f2552b4495d71bd60cf2000cede24398da8824f1260de69102011"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.467677 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl" event={"ID":"8a337463-8b7e-496b-9a01-fc491120c21d","Type":"ContainerStarted","Data":"c2331a0e3efc476073fa6f72907e46cdc0fd3358dd0c363648234586881ae09d"} Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.530308 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-249jr"] Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.535290 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-249jr"] Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.597486 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-lfczj"] Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.602815 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-lfczj"] Jan 30 05:24:33 crc kubenswrapper[4931]: I0130 05:24:33.693010 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 05:24:33 crc kubenswrapper[4931]: W0130 05:24:33.734180 4931 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28f211b_be26_4f15_92a1_36b91cb53bbb.slice/crio-920213aded2a6124fc2a4c0ef0f31260bf1b62f8e7693371989b52df16882f74 WatchSource:0}: Error finding container 920213aded2a6124fc2a4c0ef0f31260bf1b62f8e7693371989b52df16882f74: Status 404 returned error can't find the container with id 920213aded2a6124fc2a4c0ef0f31260bf1b62f8e7693371989b52df16882f74 Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.076997 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.150099 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dvktv"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.151281 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.154469 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.170583 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ntv\" (UniqueName: \"kubernetes.io/projected/4ba289fc-17e9-45e9-ac24-434d69045d97-kube-api-access-j6ntv\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.171171 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-combined-ca-bundle\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.171394 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.171568 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovn-rundir\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.171637 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba289fc-17e9-45e9-ac24-434d69045d97-config\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.171666 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovs-rundir\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 
30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.174844 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dvktv"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272363 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovn-rundir\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272433 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba289fc-17e9-45e9-ac24-434d69045d97-config\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272455 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovs-rundir\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272494 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6ntv\" (UniqueName: \"kubernetes.io/projected/4ba289fc-17e9-45e9-ac24-434d69045d97-kube-api-access-j6ntv\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272522 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-combined-ca-bundle\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272584 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.272712 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovn-rundir\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.273219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba289fc-17e9-45e9-ac24-434d69045d97-config\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.273546 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovs-rundir\") pod \"ovn-controller-metrics-dvktv\" (UID: 
\"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.280740 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.289040 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6ntv\" (UniqueName: \"kubernetes.io/projected/4ba289fc-17e9-45e9-ac24-434d69045d97-kube-api-access-j6ntv\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.301015 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-combined-ca-bundle\") pod \"ovn-controller-metrics-dvktv\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.335012 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tbgfx"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.340309 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-bzjkd"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.348577 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.351366 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.381093 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-bzjkd"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.452307 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9vtmt"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.468530 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.470076 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v6tmx"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.475171 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.478600 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.483389 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-config\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.483533 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.483558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv89g\" (UniqueName: \"kubernetes.io/projected/a34b87df-8978-4e2d-9875-a6b81a09fa84-kube-api-access-xv89g\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.483594 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-dns-svc\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.498507 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v6tmx"] Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.508141 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" event={"ID":"7358be48-7c82-45bd-8165-0c02dcdb3666","Type":"ContainerStarted","Data":"4608c1f4973c251ac1bb4b51d3c1df7c1d0e5b8e2d0e5905c0643537c2687704"} Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.508450 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.519951 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f28f211b-be26-4f15-92a1-36b91cb53bbb","Type":"ContainerStarted","Data":"920213aded2a6124fc2a4c0ef0f31260bf1b62f8e7693371989b52df16882f74"} Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.527589 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" podStartSLOduration=7.907460806 podStartE2EDuration="16.527575623s" podCreationTimestamp="2026-01-30 05:24:18 +0000 UTC" firstStartedPulling="2026-01-30 05:24:23.622727358 +0000 UTC m=+998.992637625" lastFinishedPulling="2026-01-30 05:24:32.242842185 +0000 UTC m=+1007.612752442" observedRunningTime="2026-01-30 05:24:34.526248625 +0000 UTC m=+1009.896158882" watchObservedRunningTime="2026-01-30 05:24:34.527575623 +0000 UTC m=+1009.897485880" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.585588 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586295 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586351 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv89g\" (UniqueName: \"kubernetes.io/projected/a34b87df-8978-4e2d-9875-a6b81a09fa84-kube-api-access-xv89g\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586414 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-config\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586468 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-dns-svc\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586584 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfk6\" (UniqueName: \"kubernetes.io/projected/c052a747-4d6e-459f-80c2-b690015e411d-kube-api-access-dpfk6\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586631 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586727 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-config\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.586774 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.588797 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-config\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.589478 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-dns-svc\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.589910 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.603795 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv89g\" (UniqueName: \"kubernetes.io/projected/a34b87df-8978-4e2d-9875-a6b81a09fa84-kube-api-access-xv89g\") pod \"dnsmasq-dns-794868bd45-bzjkd\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") " pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: E0130 05:24:34.645352 4931 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 30 05:24:34 crc kubenswrapper[4931]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/75ff6901-d6c9-467a-a4d2-35ddb8050570/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 05:24:34 crc kubenswrapper[4931]: > podSandboxID="d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206" Jan 30 05:24:34 crc kubenswrapper[4931]: E0130 05:24:34.645574 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:24:34 crc kubenswrapper[4931]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wkvn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-tbgfx_openstack(75ff6901-d6c9-467a-a4d2-35ddb8050570): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/75ff6901-d6c9-467a-a4d2-35ddb8050570/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 30 05:24:34 crc kubenswrapper[4931]: > logger="UnhandledError" Jan 30 05:24:34 crc kubenswrapper[4931]: E0130 05:24:34.646950 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/75ff6901-d6c9-467a-a4d2-35ddb8050570/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx" podUID="75ff6901-d6c9-467a-a4d2-35ddb8050570"
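
Note: the CreateContainerError above explains why the outgoing dnsmasq-dns-744ffd65bc-tbgfx pod never came back: its dnsmasq-dns container mounts the dns-svc ConfigMap with a subPath, which makes the runtime bind-mount a per-container source directory under /var/lib/kubelet/pods/<uid>/volume-subpaths/. The kubelet had already received a SyncLoop DELETE for this pod moments earlier, and the subPath source was gone by the time CRI-O tried to create the container, hence "No such file or directory"; the pod is torn down a few seconds later. For reference, the two subPath mounts from the &Container{...} spec dumped above correspond to the following k8s.io/api/core/v1 declaration (a transcription for illustration, not the operator's source):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // VolumeMounts of the dnsmasq-dns container, per the spec dump above.
    var dnsmasqMounts = []corev1.VolumeMount{
        // One key ("dns") of the config volume, projected as a single file.
        {Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
        // The failing mount: its SubPath is bind-mounted from a per-container source,
        // /var/lib/kubelet/pods/<uid>/volume-subpaths/dns-svc/dnsmasq-dns/1,
        // which no longer existed when the container was created.
        {Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
    }
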
Jan 30 05:24:34 crc kubenswrapper[4931]: W0130 05:24:34.649990 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49a63fb4_24bc_4834_b6e7_937688c5de09.slice/crio-d92e6599441617be3a228c318ab5084a192dbb4da1df24b47362bc9f2366da37 WatchSource:0}: Error finding container d92e6599441617be3a228c318ab5084a192dbb4da1df24b47362bc9f2366da37: Status 404 returned error can't find the container with id
d92e6599441617be3a228c318ab5084a192dbb4da1df24b47362bc9f2366da37 Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.688453 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.688530 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.688568 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-config\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.688595 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfk6\" (UniqueName: \"kubernetes.io/projected/c052a747-4d6e-459f-80c2-b690015e411d-kube-api-access-dpfk6\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.688615 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.689624 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.689684 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-config\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.689707 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.691437 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.693164
4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.706371 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfk6\" (UniqueName: \"kubernetes.io/projected/c052a747-4d6e-459f-80c2-b690015e411d-kube-api-access-dpfk6\") pod \"dnsmasq-dns-757dc6fff9-v6tmx\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:34 crc kubenswrapper[4931]: I0130 05:24:34.809834 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:35 crc kubenswrapper[4931]: I0130 05:24:35.434900 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f097854-006c-4110-bec8-b9d364ddb000" path="/var/lib/kubelet/pods/7f097854-006c-4110-bec8-b9d364ddb000/volumes" Jan 30 05:24:35 crc kubenswrapper[4931]: I0130 05:24:35.435329 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942de512-1fdc-4955-a703-ccd872474993" path="/var/lib/kubelet/pods/942de512-1fdc-4955-a703-ccd872474993/volumes" Jan 30 05:24:35 crc kubenswrapper[4931]: I0130 05:24:35.530319 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"49a63fb4-24bc-4834-b6e7-937688c5de09","Type":"ContainerStarted","Data":"d92e6599441617be3a228c318ab5084a192dbb4da1df24b47362bc9f2366da37"} Jan 30 05:24:35 crc kubenswrapper[4931]: I0130 05:24:35.530440 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="dnsmasq-dns" containerID="cri-o://4608c1f4973c251ac1bb4b51d3c1df7c1d0e5b8e2d0e5905c0643537c2687704" gracePeriod=10 Jan 30 05:24:36 crc kubenswrapper[4931]: I0130 05:24:36.541882 4931 generic.go:334] "Generic (PLEG): container finished" podID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerID="4608c1f4973c251ac1bb4b51d3c1df7c1d0e5b8e2d0e5905c0643537c2687704" exitCode=0 Jan 30 05:24:36 crc kubenswrapper[4931]: I0130 05:24:36.541936 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" event={"ID":"7358be48-7c82-45bd-8165-0c02dcdb3666","Type":"ContainerDied","Data":"4608c1f4973c251ac1bb4b51d3c1df7c1d0e5b8e2d0e5905c0643537c2687704"} Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.555658 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx" event={"ID":"75ff6901-d6c9-467a-a4d2-35ddb8050570","Type":"ContainerDied","Data":"d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206"} Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.555866 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42db6c19bc5af9b7f0bda95ed922048afc3e58c472145aa59ba1e17a5184206" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.589606 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.745116 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkvn6\" (UniqueName: \"kubernetes.io/projected/75ff6901-d6c9-467a-a4d2-35ddb8050570-kube-api-access-wkvn6\") pod \"75ff6901-d6c9-467a-a4d2-35ddb8050570\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.745369 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-dns-svc\") pod \"75ff6901-d6c9-467a-a4d2-35ddb8050570\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.745485 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-config\") pod \"75ff6901-d6c9-467a-a4d2-35ddb8050570\" (UID: \"75ff6901-d6c9-467a-a4d2-35ddb8050570\") " Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.757664 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ff6901-d6c9-467a-a4d2-35ddb8050570-kube-api-access-wkvn6" (OuterVolumeSpecName: "kube-api-access-wkvn6") pod "75ff6901-d6c9-467a-a4d2-35ddb8050570" (UID: "75ff6901-d6c9-467a-a4d2-35ddb8050570"). InnerVolumeSpecName "kube-api-access-wkvn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.794360 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-config" (OuterVolumeSpecName: "config") pod "75ff6901-d6c9-467a-a4d2-35ddb8050570" (UID: "75ff6901-d6c9-467a-a4d2-35ddb8050570"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.796280 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75ff6901-d6c9-467a-a4d2-35ddb8050570" (UID: "75ff6901-d6c9-467a-a4d2-35ddb8050570"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.848232 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkvn6\" (UniqueName: \"kubernetes.io/projected/75ff6901-d6c9-467a-a4d2-35ddb8050570-kube-api-access-wkvn6\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.848305 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:37 crc kubenswrapper[4931]: I0130 05:24:37.848315 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ff6901-d6c9-467a-a4d2-35ddb8050570-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:38 crc kubenswrapper[4931]: I0130 05:24:38.569977 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-tbgfx" Jan 30 05:24:38 crc kubenswrapper[4931]: I0130 05:24:38.631774 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tbgfx"] Jan 30 05:24:38 crc kubenswrapper[4931]: I0130 05:24:38.637168 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-tbgfx"] Jan 30 05:24:39 crc kubenswrapper[4931]: I0130 05:24:39.433907 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ff6901-d6c9-467a-a4d2-35ddb8050570" path="/var/lib/kubelet/pods/75ff6901-d6c9-467a-a4d2-35ddb8050570/volumes" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.668017 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.802332 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knxhw\" (UniqueName: \"kubernetes.io/projected/7358be48-7c82-45bd-8165-0c02dcdb3666-kube-api-access-knxhw\") pod \"7358be48-7c82-45bd-8165-0c02dcdb3666\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.802481 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-config\") pod \"7358be48-7c82-45bd-8165-0c02dcdb3666\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.802586 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-dns-svc\") pod \"7358be48-7c82-45bd-8165-0c02dcdb3666\" (UID: \"7358be48-7c82-45bd-8165-0c02dcdb3666\") " Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.808938 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7358be48-7c82-45bd-8165-0c02dcdb3666-kube-api-access-knxhw" (OuterVolumeSpecName: "kube-api-access-knxhw") pod "7358be48-7c82-45bd-8165-0c02dcdb3666" (UID: "7358be48-7c82-45bd-8165-0c02dcdb3666"). InnerVolumeSpecName "kube-api-access-knxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.855564 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-config" (OuterVolumeSpecName: "config") pod "7358be48-7c82-45bd-8165-0c02dcdb3666" (UID: "7358be48-7c82-45bd-8165-0c02dcdb3666"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.859383 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7358be48-7c82-45bd-8165-0c02dcdb3666" (UID: "7358be48-7c82-45bd-8165-0c02dcdb3666"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.904033 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knxhw\" (UniqueName: \"kubernetes.io/projected/7358be48-7c82-45bd-8165-0c02dcdb3666-kube-api-access-knxhw\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.904071 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:40 crc kubenswrapper[4931]: I0130 05:24:40.904081 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7358be48-7c82-45bd-8165-0c02dcdb3666-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:24:41 crc kubenswrapper[4931]: I0130 05:24:41.608395 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" event={"ID":"7358be48-7c82-45bd-8165-0c02dcdb3666","Type":"ContainerDied","Data":"16572f431ff9e8b9359ca1361817097bd7aaff980c134f42bc30a47f2e623bfc"} Jan 30 05:24:41 crc kubenswrapper[4931]: I0130 05:24:41.608537 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" Jan 30 05:24:41 crc kubenswrapper[4931]: I0130 05:24:41.608556 4931 scope.go:117] "RemoveContainer" containerID="4608c1f4973c251ac1bb4b51d3c1df7c1d0e5b8e2d0e5905c0643537c2687704" Jan 30 05:24:41 crc kubenswrapper[4931]: I0130 05:24:41.790133 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9vtmt"] Jan 30 05:24:41 crc kubenswrapper[4931]: I0130 05:24:41.798809 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-9vtmt"] Jan 30 05:24:41 crc kubenswrapper[4931]: I0130 05:24:41.826064 4931 scope.go:117] "RemoveContainer" containerID="d84e00fba877076358074492301ade07a08806d5e1a8e09523a5b7de67a88279" Jan 30 05:24:42 crc kubenswrapper[4931]: I0130 05:24:42.669518 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dvktv"] Jan 30 05:24:42 crc kubenswrapper[4931]: I0130 05:24:42.898109 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-bzjkd"] Jan 30 05:24:42 crc kubenswrapper[4931]: I0130 05:24:42.904689 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v6tmx"] Jan 30 05:24:43 crc kubenswrapper[4931]: W0130 05:24:43.265382 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda34b87df_8978_4e2d_9875_a6b81a09fa84.slice/crio-d1724450fd332ef12f3dd92f8b86fd94720600e08c372a71b460370b61dc3ade WatchSource:0}: Error finding container d1724450fd332ef12f3dd92f8b86fd94720600e08c372a71b460370b61dc3ade: Status 404 returned error can't find the container with id d1724450fd332ef12f3dd92f8b86fd94720600e08c372a71b460370b61dc3ade Jan 30 05:24:43 crc kubenswrapper[4931]: I0130 05:24:43.457995 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" path="/var/lib/kubelet/pods/7358be48-7c82-45bd-8165-0c02dcdb3666/volumes" Jan 30 05:24:43 crc kubenswrapper[4931]: I0130 05:24:43.650930 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dvktv" 
event={"ID":"4ba289fc-17e9-45e9-ac24-434d69045d97","Type":"ContainerStarted","Data":"39a86ec198f21c9ed97c5b274927fc46f2f6f56ea606ee080f8268afe4d2241b"} Jan 30 05:24:43 crc kubenswrapper[4931]: I0130 05:24:43.653974 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" event={"ID":"c052a747-4d6e-459f-80c2-b690015e411d","Type":"ContainerStarted","Data":"d8ae5c6c06a93c29197bfde41e6a215859930a15dc388d2269865aa48021ba9a"} Jan 30 05:24:43 crc kubenswrapper[4931]: I0130 05:24:43.655142 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" event={"ID":"a34b87df-8978-4e2d-9875-a6b81a09fa84","Type":"ContainerStarted","Data":"d1724450fd332ef12f3dd92f8b86fd94720600e08c372a71b460370b61dc3ade"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.226284 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-95f5f6995-9vtmt" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.100:5353: i/o timeout" Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.669057 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"348ffd7a-9b7f-40aa-ada9-145a3a783d09","Type":"ContainerStarted","Data":"8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.678130 4931 generic.go:334] "Generic (PLEG): container finished" podID="c052a747-4d6e-459f-80c2-b690015e411d" containerID="2951358824ae5ca54f437c7afd5ea7478602f9317a7330914d36e2cd66c684f6" exitCode=0 Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.678762 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" event={"ID":"c052a747-4d6e-459f-80c2-b690015e411d","Type":"ContainerDied","Data":"2951358824ae5ca54f437c7afd5ea7478602f9317a7330914d36e2cd66c684f6"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.682620 4931 generic.go:334] "Generic (PLEG): container finished" podID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerID="f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054" exitCode=0 Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.682686 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerDied","Data":"f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.686178 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f28f211b-be26-4f15-92a1-36b91cb53bbb","Type":"ContainerStarted","Data":"8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.708967 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba","Type":"ContainerStarted","Data":"d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.709123 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.710708 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"081e3873-ea99-4486-925f-784a98e49405","Type":"ContainerStarted","Data":"4db4fc560f1e0be65146b56bdc4340b3ff1c5a4fe7510a353795090f99291213"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.711913 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2","Type":"ContainerStarted","Data":"b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.711981 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.712977 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef","Type":"ContainerStarted","Data":"c1a9c2221cdd695e864a018795817226f2731d9a93a0ec46938ca160bd878ce0"} Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.786316 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.485921152 podStartE2EDuration="21.786297957s" podCreationTimestamp="2026-01-30 05:24:23 +0000 UTC" firstStartedPulling="2026-01-30 05:24:32.958193806 +0000 UTC m=+1008.328104063" lastFinishedPulling="2026-01-30 05:24:41.258570571 +0000 UTC m=+1016.628480868" observedRunningTime="2026-01-30 05:24:44.775786052 +0000 UTC m=+1020.145696329" watchObservedRunningTime="2026-01-30 05:24:44.786297957 +0000 UTC m=+1020.156208214" Jan 30 05:24:44 crc kubenswrapper[4931]: I0130 05:24:44.819860 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.403326916 podStartE2EDuration="20.819837408s" podCreationTimestamp="2026-01-30 05:24:24 +0000 UTC" firstStartedPulling="2026-01-30 05:24:32.957881127 +0000 UTC m=+1008.327791384" lastFinishedPulling="2026-01-30 05:24:43.374391609 +0000 UTC m=+1018.744301876" observedRunningTime="2026-01-30 05:24:44.814097977 +0000 UTC m=+1020.184008234" watchObservedRunningTime="2026-01-30 05:24:44.819837408 +0000 UTC m=+1020.189747665" Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.725737 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" event={"ID":"c052a747-4d6e-459f-80c2-b690015e411d","Type":"ContainerStarted","Data":"3eb355680179efbcbb2cf73e83f9b34f38755a348dc73a0a8db4b58a9c1de2f1"} Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.726692 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.727828 4931 generic.go:334] "Generic (PLEG): container finished" podID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerID="b4197288cd21de4e055a7299affb1cb43a53991838539d7286331173ba92c743" exitCode=0 Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.727883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" event={"ID":"a34b87df-8978-4e2d-9875-a6b81a09fa84","Type":"ContainerDied","Data":"b4197288cd21de4e055a7299affb1cb43a53991838539d7286331173ba92c743"} Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.730017 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"49a63fb4-24bc-4834-b6e7-937688c5de09","Type":"ContainerStarted","Data":"ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21"} Jan 30 05:24:45 crc 
kubenswrapper[4931]: I0130 05:24:45.735663 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dvktv" event={"ID":"4ba289fc-17e9-45e9-ac24-434d69045d97","Type":"ContainerStarted","Data":"82c70d68aab65fc3db72ee184a048732b17b72a09f49232810d0c430a261f1e7"} Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.739548 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f28f211b-be26-4f15-92a1-36b91cb53bbb","Type":"ContainerStarted","Data":"4583c4efc80289000d7023b793d84dc55442d51907c690cc558e927738cb2e88"} Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.742000 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl" event={"ID":"8a337463-8b7e-496b-9a01-fc491120c21d","Type":"ContainerStarted","Data":"324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9"} Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.775842 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" podStartSLOduration=11.775808112 podStartE2EDuration="11.775808112s" podCreationTimestamp="2026-01-30 05:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:24:45.755846282 +0000 UTC m=+1021.125756579" watchObservedRunningTime="2026-01-30 05:24:45.775808112 +0000 UTC m=+1021.145718399" Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.791091 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dvktv" podStartSLOduration=9.907045496 podStartE2EDuration="11.79105719s" podCreationTimestamp="2026-01-30 05:24:34 +0000 UTC" firstStartedPulling="2026-01-30 05:24:42.764034314 +0000 UTC m=+1018.133944571" lastFinishedPulling="2026-01-30 05:24:44.648046008 +0000 UTC m=+1020.017956265" observedRunningTime="2026-01-30 05:24:45.790099023 +0000 UTC m=+1021.160009290" watchObservedRunningTime="2026-01-30 05:24:45.79105719 +0000 UTC m=+1021.160967457" Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.908220 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ggjtl" podStartSLOduration=7.61731316 podStartE2EDuration="17.908202247s" podCreationTimestamp="2026-01-30 05:24:28 +0000 UTC" firstStartedPulling="2026-01-30 05:24:32.956861429 +0000 UTC m=+1008.326771686" lastFinishedPulling="2026-01-30 05:24:43.247750486 +0000 UTC m=+1018.617660773" observedRunningTime="2026-01-30 05:24:45.905180703 +0000 UTC m=+1021.275090980" watchObservedRunningTime="2026-01-30 05:24:45.908202247 +0000 UTC m=+1021.278112504" Jan 30 05:24:45 crc kubenswrapper[4931]: I0130 05:24:45.927806 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.025953387 podStartE2EDuration="14.927789897s" podCreationTimestamp="2026-01-30 05:24:31 +0000 UTC" firstStartedPulling="2026-01-30 05:24:33.741434554 +0000 UTC m=+1009.111344811" lastFinishedPulling="2026-01-30 05:24:44.643271064 +0000 UTC m=+1020.013181321" observedRunningTime="2026-01-30 05:24:45.924722691 +0000 UTC m=+1021.294632948" watchObservedRunningTime="2026-01-30 05:24:45.927789897 +0000 UTC m=+1021.297700154" Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.756417 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"fc3f4796-66b1-452b-afca-5e62cbf2a53b","Type":"ContainerStarted","Data":"8bdcfbd624616a917de046867a4b176539e978b80dd1b9fad737dfdab9cb1bce"} Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.760364 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" event={"ID":"a34b87df-8978-4e2d-9875-a6b81a09fa84","Type":"ContainerStarted","Data":"9d327fe89d71a738231f8f91b804639f88053bf30faeeb4596482e0db97b8f02"} Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.760546 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.764026 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"49a63fb4-24bc-4834-b6e7-937688c5de09","Type":"ContainerStarted","Data":"36f1f59d90f7e1367de837bd2375e2c11d0df21e5687e4d77b474faff3e8df0b"} Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.768611 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerStarted","Data":"52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17"} Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.768683 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerStarted","Data":"ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1"} Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.769466 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ggjtl" Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.842074 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" podStartSLOduration=12.8420398 podStartE2EDuration="12.8420398s" podCreationTimestamp="2026-01-30 05:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:24:46.836868614 +0000 UTC m=+1022.206778911" watchObservedRunningTime="2026-01-30 05:24:46.8420398 +0000 UTC m=+1022.211950097" Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.881401 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-thxc2" podStartSLOduration=10.690363848 podStartE2EDuration="18.881361503s" podCreationTimestamp="2026-01-30 05:24:28 +0000 UTC" firstStartedPulling="2026-01-30 05:24:33.067515874 +0000 UTC m=+1008.437426131" lastFinishedPulling="2026-01-30 05:24:41.258513529 +0000 UTC m=+1016.628423786" observedRunningTime="2026-01-30 05:24:46.863398129 +0000 UTC m=+1022.233308426" watchObservedRunningTime="2026-01-30 05:24:46.881361503 +0000 UTC m=+1022.251271800" Jan 30 05:24:46 crc kubenswrapper[4931]: I0130 05:24:46.907952 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.304373317 podStartE2EDuration="18.907914858s" podCreationTimestamp="2026-01-30 05:24:28 +0000 UTC" firstStartedPulling="2026-01-30 05:24:34.652475247 +0000 UTC m=+1010.022385504" lastFinishedPulling="2026-01-30 05:24:43.256016788 +0000 UTC m=+1018.625927045" observedRunningTime="2026-01-30 05:24:46.904185383 +0000 UTC m=+1022.274095700" watchObservedRunningTime="2026-01-30 05:24:46.907914858 +0000 UTC m=+1022.277825165" Jan 30 
05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.780218 4931 generic.go:334] "Generic (PLEG): container finished" podID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerID="8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07" exitCode=0 Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.780346 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"348ffd7a-9b7f-40aa-ada9-145a3a783d09","Type":"ContainerDied","Data":"8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07"} Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.785777 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerID="c1a9c2221cdd695e864a018795817226f2731d9a93a0ec46938ca160bd878ce0" exitCode=0 Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.785922 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef","Type":"ContainerDied","Data":"c1a9c2221cdd695e864a018795817226f2731d9a93a0ec46938ca160bd878ce0"} Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.787190 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.787460 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.810172 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:47 crc kubenswrapper[4931]: I0130 05:24:47.908511 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.118799 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.119068 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.187638 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.479262 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.794900 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"348ffd7a-9b7f-40aa-ada9-145a3a783d09","Type":"ContainerStarted","Data":"2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257"} Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.797302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef","Type":"ContainerStarted","Data":"1c39e215a63df32503630d9061f8096755ed069b7c32eec93d18140193ca977b"} Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.797810 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.834243 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.497309281 podStartE2EDuration="28.83421763s" podCreationTimestamp="2026-01-30 05:24:20 +0000 UTC" 
firstStartedPulling="2026-01-30 05:24:32.122159468 +0000 UTC m=+1007.492069725" lastFinishedPulling="2026-01-30 05:24:41.459067817 +0000 UTC m=+1016.828978074" observedRunningTime="2026-01-30 05:24:48.821644257 +0000 UTC m=+1024.191554514" watchObservedRunningTime="2026-01-30 05:24:48.83421763 +0000 UTC m=+1024.204127897" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.853312 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.035221224 podStartE2EDuration="27.853289685s" podCreationTimestamp="2026-01-30 05:24:21 +0000 UTC" firstStartedPulling="2026-01-30 05:24:33.001679446 +0000 UTC m=+1008.371589703" lastFinishedPulling="2026-01-30 05:24:41.819747917 +0000 UTC m=+1017.189658164" observedRunningTime="2026-01-30 05:24:48.849530309 +0000 UTC m=+1024.219440616" watchObservedRunningTime="2026-01-30 05:24:48.853289685 +0000 UTC m=+1024.223199952" Jan 30 05:24:48 crc kubenswrapper[4931]: I0130 05:24:48.860833 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 05:24:49 crc kubenswrapper[4931]: I0130 05:24:49.867788 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.045718 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 05:24:50 crc kubenswrapper[4931]: E0130 05:24:50.046002 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="init" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.046019 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="init" Jan 30 05:24:50 crc kubenswrapper[4931]: E0130 05:24:50.046042 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ff6901-d6c9-467a-a4d2-35ddb8050570" containerName="init" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.046048 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ff6901-d6c9-467a-a4d2-35ddb8050570" containerName="init" Jan 30 05:24:50 crc kubenswrapper[4931]: E0130 05:24:50.046077 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="dnsmasq-dns" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.046083 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="dnsmasq-dns" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.046215 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ff6901-d6c9-467a-a4d2-35ddb8050570" containerName="init" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.046231 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7358be48-7c82-45bd-8165-0c02dcdb3666" containerName="dnsmasq-dns" Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.046950 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.048895 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.049039 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7t2b7"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.049125 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.049409 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.066684 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.167790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.168019 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.168067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.168134 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-scripts\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.168161 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-config\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.168276 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb59q\" (UniqueName: \"kubernetes.io/projected/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-kube-api-access-qb59q\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.168325 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.269760 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.269807 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.269835 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-scripts\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.269852 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-config\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.269877 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb59q\" (UniqueName: \"kubernetes.io/projected/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-kube-api-access-qb59q\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.269895 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.270203 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.270287 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.270834 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-config\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.271306 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-scripts\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.275603 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.275722 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.278069 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.289660 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb59q\" (UniqueName: \"kubernetes.io/projected/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-kube-api-access-qb59q\") pod \"ovn-northd-0\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.406393 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 30 05:24:50 crc kubenswrapper[4931]: E0130 05:24:50.719518 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:36928->38.102.83.179:45103: write tcp 38.102.83.179:36928->38.102.83.179:45103: write: broken pipe
Jan 30 05:24:50 crc kubenswrapper[4931]: I0130 05:24:50.896847 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 05:24:51 crc kubenswrapper[4931]: I0130 05:24:51.715393 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 05:24:51 crc kubenswrapper[4931]: I0130 05:24:51.715939 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 05:24:51 crc kubenswrapper[4931]: I0130 05:24:51.840122 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91","Type":"ContainerStarted","Data":"14085ad5b1fb30e2e472a98f1ef3cb304ab6fa42857d4a5f5e235f581937f71b"}
Jan 30 05:24:52 crc kubenswrapper[4931]: I0130 05:24:52.851621 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91","Type":"ContainerStarted","Data":"dea51d6ee685a2470eaa0864347990ea744cedc85d00846568c67d56ba221ee1"}
Jan 30 05:24:52 crc kubenswrapper[4931]: I0130 05:24:52.852140 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91","Type":"ContainerStarted","Data":"cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949"}
Jan 30 05:24:52 crc kubenswrapper[4931]: I0130 05:24:52.852165 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 30 05:24:52 crc kubenswrapper[4931]: I0130 05:24:52.890749 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.909024077 podStartE2EDuration="2.890717783s" podCreationTimestamp="2026-01-30 05:24:50 +0000 UTC" firstStartedPulling="2026-01-30 05:24:50.926175558 +0000 UTC m=+1026.296085825" lastFinishedPulling="2026-01-30 05:24:51.907869234 +0000 UTC m=+1027.277779531" observedRunningTime="2026-01-30 05:24:52.874717464 +0000 UTC m=+1028.244627761" watchObservedRunningTime="2026-01-30 05:24:52.890717783 +0000 UTC m=+1028.260628080"
Jan 30 05:24:53 crc kubenswrapper[4931]: I0130 05:24:53.162184 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 05:24:53 crc kubenswrapper[4931]: I0130 05:24:53.162262 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 05:24:53 crc kubenswrapper[4931]: I0130 05:24:53.285832 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 05:24:53 crc kubenswrapper[4931]: I0130 05:24:53.965954 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 05:24:54 crc kubenswrapper[4931]: I0130 05:24:54.694680 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794868bd45-bzjkd"
Jan 30 05:24:54 crc kubenswrapper[4931]: I0130 05:24:54.811640 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx"
Jan 30 05:24:54 crc kubenswrapper[4931]: I0130 05:24:54.886991 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-bzjkd"]
Jan 30 05:24:54 crc kubenswrapper[4931]: I0130 05:24:54.896243 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerName="dnsmasq-dns" containerID="cri-o://9d327fe89d71a738231f8f91b804639f88053bf30faeeb4596482e0db97b8f02" gracePeriod=10
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.271645 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-5hp4b"]
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.272842 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.288564 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-5hp4b"]
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.366895 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.366966 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.367012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.367034 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-config\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.367056 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pjqc\" (UniqueName: \"kubernetes.io/projected/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-kube-api-access-4pjqc\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.387529 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.468580 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-config\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.468626 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pjqc\" (UniqueName: \"kubernetes.io/projected/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-kube-api-access-4pjqc\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.468695 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.468735 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.468784 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.469514 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-config\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.469563 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.469641 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.469757 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.489096 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pjqc\" (UniqueName: \"kubernetes.io/projected/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-kube-api-access-4pjqc\") pod \"dnsmasq-dns-6cb545bd4c-5hp4b\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.590432 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b"
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.820918 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-5hp4b"]
Jan 30 05:24:55 crc kubenswrapper[4931]: W0130 05:24:55.823146 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb20eb5e_4f22_4088_98dc_44eaf5ac5958.slice/crio-683aec62918f40b319d4b21f6811c3625fb69dac85c6d8a9170f3b1e7160bffa WatchSource:0}: Error finding container 683aec62918f40b319d4b21f6811c3625fb69dac85c6d8a9170f3b1e7160bffa: Status 404 returned error can't find the container with id 683aec62918f40b319d4b21f6811c3625fb69dac85c6d8a9170f3b1e7160bffa
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.906198 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" event={"ID":"eb20eb5e-4f22-4088-98dc-44eaf5ac5958","Type":"ContainerStarted","Data":"683aec62918f40b319d4b21f6811c3625fb69dac85c6d8a9170f3b1e7160bffa"}
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.908927 4931 generic.go:334] "Generic (PLEG): container finished" podID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerID="9d327fe89d71a738231f8f91b804639f88053bf30faeeb4596482e0db97b8f02" exitCode=0
Jan 30 05:24:55 crc kubenswrapper[4931]: I0130 05:24:55.908973 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" event={"ID":"a34b87df-8978-4e2d-9875-a6b81a09fa84","Type":"ContainerDied","Data":"9d327fe89d71a738231f8f91b804639f88053bf30faeeb4596482e0db97b8f02"}
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.224075 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.303701 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.384130 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.430055 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.430342 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.440145 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.440283 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.440539 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.440673 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-nmd2f"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.484080 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.484138 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-cache\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.484168 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52577244-c181-4919-b5b0-040e229163db-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.484241 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-lock\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.484297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.484344 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56w5l\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-kube-api-access-56w5l\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.586216 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.586291 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-cache\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.586335 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52577244-c181-4919-b5b0-040e229163db-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.586440 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-lock\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: E0130 05:24:56.586462 4931 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 05:24:56 crc kubenswrapper[4931]: E0130 05:24:56.586492 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.586525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: E0130 05:24:56.586553 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift podName:52577244-c181-4919-b5b0-040e229163db nodeName:}" failed. No retries permitted until 2026-01-30 05:24:57.086530636 +0000 UTC m=+1032.456440903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift") pod "swift-storage-0" (UID: "52577244-c181-4919-b5b0-040e229163db") : configmap "swift-ring-files" not found
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.586575 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56w5l\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-kube-api-access-56w5l\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.587051 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.587205 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-lock\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.588007 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-cache\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.593617 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52577244-c181-4919-b5b0-040e229163db-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.611414 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56w5l\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-kube-api-access-56w5l\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.631802 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.879930 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bcdcb"]
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.881223 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.886250 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.886938 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.888446 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.909455 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bcdcb"]
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993091 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-dispersionconf\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993147 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-ring-data-devices\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993171 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9b9ebe73-0201-4486-9de9-e8828e84de53-etc-swift\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993217 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-combined-ca-bundle\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993493 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-scripts\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993556 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-swiftconf\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:56 crc kubenswrapper[4931]: I0130 05:24:56.993596 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88xkk\" (UniqueName: \"kubernetes.io/projected/9b9ebe73-0201-4486-9de9-e8828e84de53-kube-api-access-88xkk\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094623 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-dispersionconf\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094688 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-ring-data-devices\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094711 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9b9ebe73-0201-4486-9de9-e8828e84de53-etc-swift\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094749 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-combined-ca-bundle\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094784 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094815 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-scripts\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094839 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-swiftconf\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.094859 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88xkk\" (UniqueName: \"kubernetes.io/projected/9b9ebe73-0201-4486-9de9-e8828e84de53-kube-api-access-88xkk\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.095255 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9b9ebe73-0201-4486-9de9-e8828e84de53-etc-swift\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: E0130 05:24:57.095284 4931 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 05:24:57 crc kubenswrapper[4931]: E0130 05:24:57.095308 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.095675 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-scripts\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.095785 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-ring-data-devices\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: E0130 05:24:57.095848 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift podName:52577244-c181-4919-b5b0-040e229163db nodeName:}" failed. No retries permitted until 2026-01-30 05:24:58.095833577 +0000 UTC m=+1033.465743834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift") pod "swift-storage-0" (UID: "52577244-c181-4919-b5b0-040e229163db") : configmap "swift-ring-files" not found
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.098229 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-dispersionconf\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.098716 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-swiftconf\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.099471 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-combined-ca-bundle\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.122142 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88xkk\" (UniqueName: \"kubernetes.io/projected/9b9ebe73-0201-4486-9de9-e8828e84de53-kube-api-access-88xkk\") pod \"swift-ring-rebalance-bcdcb\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.207240 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bcdcb"
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.759562 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bcdcb"]
Jan 30 05:24:57 crc kubenswrapper[4931]: W0130 05:24:57.763624 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9ebe73_0201_4486_9de9_e8828e84de53.slice/crio-e0aeb9238b4a2a03f01dbc9b575b6093a31f0d939b0c8c92582fa2fe6528a320 WatchSource:0}: Error finding container e0aeb9238b4a2a03f01dbc9b575b6093a31f0d939b0c8c92582fa2fe6528a320: Status 404 returned error can't find the container with id e0aeb9238b4a2a03f01dbc9b575b6093a31f0d939b0c8c92582fa2fe6528a320
Jan 30 05:24:57 crc kubenswrapper[4931]: I0130 05:24:57.924195 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bcdcb" event={"ID":"9b9ebe73-0201-4486-9de9-e8828e84de53","Type":"ContainerStarted","Data":"e0aeb9238b4a2a03f01dbc9b575b6093a31f0d939b0c8c92582fa2fe6528a320"}
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.114385 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:24:58 crc kubenswrapper[4931]: E0130 05:24:58.114622 4931 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 05:24:58 crc kubenswrapper[4931]: E0130 05:24:58.114797 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 05:24:58 crc kubenswrapper[4931]: E0130 05:24:58.114857 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift podName:52577244-c181-4919-b5b0-040e229163db nodeName:}" failed. No retries permitted until 2026-01-30 05:25:00.114840489 +0000 UTC m=+1035.484750746 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift") pod "swift-storage-0" (UID: "52577244-c181-4919-b5b0-040e229163db") : configmap "swift-ring-files" not found
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.748785 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-df05-account-create-update-xmzpk"]
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.750027 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.752270 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.780319 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-df05-account-create-update-xmzpk"]
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.793168 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9z9pd"]
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.794623 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.802402 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9z9pd"]
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.826182 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69lh\" (UniqueName: \"kubernetes.io/projected/bf1b1f6c-2147-48f7-87ea-e64672036831-kube-api-access-k69lh\") pod \"glance-df05-account-create-update-xmzpk\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.826232 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvbc\" (UniqueName: \"kubernetes.io/projected/6ee75b9c-df74-490e-94ff-21eacce0b65a-kube-api-access-snvbc\") pod \"glance-db-create-9z9pd\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.826286 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee75b9c-df74-490e-94ff-21eacce0b65a-operator-scripts\") pod \"glance-db-create-9z9pd\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.826321 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1b1f6c-2147-48f7-87ea-e64672036831-operator-scripts\") pod \"glance-df05-account-create-update-xmzpk\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.932624 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1b1f6c-2147-48f7-87ea-e64672036831-operator-scripts\") pod \"glance-df05-account-create-update-xmzpk\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.932769 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69lh\" (UniqueName: \"kubernetes.io/projected/bf1b1f6c-2147-48f7-87ea-e64672036831-kube-api-access-k69lh\") pod \"glance-df05-account-create-update-xmzpk\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.932791 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snvbc\" (UniqueName: \"kubernetes.io/projected/6ee75b9c-df74-490e-94ff-21eacce0b65a-kube-api-access-snvbc\") pod \"glance-db-create-9z9pd\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.932826 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee75b9c-df74-490e-94ff-21eacce0b65a-operator-scripts\") pod \"glance-db-create-9z9pd\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.933922 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1b1f6c-2147-48f7-87ea-e64672036831-operator-scripts\") pod \"glance-df05-account-create-update-xmzpk\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.934851 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee75b9c-df74-490e-94ff-21eacce0b65a-operator-scripts\") pod \"glance-db-create-9z9pd\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.960254 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snvbc\" (UniqueName: \"kubernetes.io/projected/6ee75b9c-df74-490e-94ff-21eacce0b65a-kube-api-access-snvbc\") pod \"glance-db-create-9z9pd\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:58 crc kubenswrapper[4931]: I0130 05:24:58.962013 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69lh\" (UniqueName: \"kubernetes.io/projected/bf1b1f6c-2147-48f7-87ea-e64672036831-kube-api-access-k69lh\") pod \"glance-df05-account-create-update-xmzpk\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.077187 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df05-account-create-update-xmzpk"
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.120925 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9z9pd"
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.314669 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-bzjkd"
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.445005 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv89g\" (UniqueName: \"kubernetes.io/projected/a34b87df-8978-4e2d-9875-a6b81a09fa84-kube-api-access-xv89g\") pod \"a34b87df-8978-4e2d-9875-a6b81a09fa84\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") "
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.445387 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-config\") pod \"a34b87df-8978-4e2d-9875-a6b81a09fa84\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") "
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.445457 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-ovsdbserver-sb\") pod \"a34b87df-8978-4e2d-9875-a6b81a09fa84\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") "
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.445540 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-dns-svc\") pod \"a34b87df-8978-4e2d-9875-a6b81a09fa84\" (UID: \"a34b87df-8978-4e2d-9875-a6b81a09fa84\") "
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.450161 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34b87df-8978-4e2d-9875-a6b81a09fa84-kube-api-access-xv89g" (OuterVolumeSpecName: "kube-api-access-xv89g") pod "a34b87df-8978-4e2d-9875-a6b81a09fa84" (UID: "a34b87df-8978-4e2d-9875-a6b81a09fa84"). InnerVolumeSpecName "kube-api-access-xv89g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.484747 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-config" (OuterVolumeSpecName: "config") pod "a34b87df-8978-4e2d-9875-a6b81a09fa84" (UID: "a34b87df-8978-4e2d-9875-a6b81a09fa84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.486890 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a34b87df-8978-4e2d-9875-a6b81a09fa84" (UID: "a34b87df-8978-4e2d-9875-a6b81a09fa84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.492120 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a34b87df-8978-4e2d-9875-a6b81a09fa84" (UID: "a34b87df-8978-4e2d-9875-a6b81a09fa84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.547186 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.547220 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv89g\" (UniqueName: \"kubernetes.io/projected/a34b87df-8978-4e2d-9875-a6b81a09fa84-kube-api-access-xv89g\") on node \"crc\" DevicePath \"\""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.547230 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.547238 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a34b87df-8978-4e2d-9875-a6b81a09fa84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.550940 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-df05-account-create-update-xmzpk"]
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.641492 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9z9pd"]
Jan 30 05:24:59 crc kubenswrapper[4931]: W0130 05:24:59.650556 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ee75b9c_df74_490e_94ff_21eacce0b65a.slice/crio-9419c3c276f323b6a29f149a011311b1d434b4549c0a4bf9eff6ed75a0a8a514 WatchSource:0}: Error finding container 9419c3c276f323b6a29f149a011311b1d434b4549c0a4bf9eff6ed75a0a8a514: Status 404 returned error can't find the container with id 9419c3c276f323b6a29f149a011311b1d434b4549c0a4bf9eff6ed75a0a8a514
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.946761 4931 generic.go:334] "Generic (PLEG): container finished" podID="bf1b1f6c-2147-48f7-87ea-e64672036831" containerID="b62af9a31208f4045d6ab5fc627a9d3f9b63bc460555779073074656653065f9" exitCode=0
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.946840 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df05-account-create-update-xmzpk" event={"ID":"bf1b1f6c-2147-48f7-87ea-e64672036831","Type":"ContainerDied","Data":"b62af9a31208f4045d6ab5fc627a9d3f9b63bc460555779073074656653065f9"}
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.946871 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df05-account-create-update-xmzpk" event={"ID":"bf1b1f6c-2147-48f7-87ea-e64672036831","Type":"ContainerStarted","Data":"03bab058fa21df89a2e2e3b3b9b06339747851c18634d63060a8d6a53301dcfa"}
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.949021 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerID="5b51c3e6a6e67206beccccc2be017d2e75bb1a8386fa12f6af6b641475f06048" exitCode=0
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.949355 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" event={"ID":"eb20eb5e-4f22-4088-98dc-44eaf5ac5958","Type":"ContainerDied","Data":"5b51c3e6a6e67206beccccc2be017d2e75bb1a8386fa12f6af6b641475f06048"}
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.964717 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-bzjkd" event={"ID":"a34b87df-8978-4e2d-9875-a6b81a09fa84","Type":"ContainerDied","Data":"d1724450fd332ef12f3dd92f8b86fd94720600e08c372a71b460370b61dc3ade"}
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.964972 4931 scope.go:117] "RemoveContainer" containerID="9d327fe89d71a738231f8f91b804639f88053bf30faeeb4596482e0db97b8f02"
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.965073 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-bzjkd"
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.980357 4931 generic.go:334] "Generic (PLEG): container finished" podID="6ee75b9c-df74-490e-94ff-21eacce0b65a" containerID="b7fd522240b80788d80f7919145a4aa75ecf42cdb18b9fd6434f7a190f674261" exitCode=0
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.980388 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9z9pd" event={"ID":"6ee75b9c-df74-490e-94ff-21eacce0b65a","Type":"ContainerDied","Data":"b7fd522240b80788d80f7919145a4aa75ecf42cdb18b9fd6434f7a190f674261"}
Jan 30 05:24:59 crc kubenswrapper[4931]: I0130 05:24:59.980411 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9z9pd" event={"ID":"6ee75b9c-df74-490e-94ff-21eacce0b65a","Type":"ContainerStarted","Data":"9419c3c276f323b6a29f149a011311b1d434b4549c0a4bf9eff6ed75a0a8a514"}
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.159890 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0"
Jan 30 05:25:00 crc kubenswrapper[4931]: E0130 05:25:00.160121 4931 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 05:25:00 crc kubenswrapper[4931]: E0130 05:25:00.160149 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 05:25:00 crc kubenswrapper[4931]: E0130 05:25:00.160224 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift podName:52577244-c181-4919-b5b0-040e229163db nodeName:}" failed. No retries permitted until 2026-01-30 05:25:04.160200911 +0000 UTC m=+1039.530111168 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift") pod "swift-storage-0" (UID: "52577244-c181-4919-b5b0-040e229163db") : configmap "swift-ring-files" not found
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.163857 4931 scope.go:117] "RemoveContainer" containerID="b4197288cd21de4e055a7299affb1cb43a53991838539d7286331173ba92c743"
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.188462 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-bzjkd"]
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.197393 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-bzjkd"]
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.325383 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9w9jf"]
Jan 30 05:25:00 crc kubenswrapper[4931]: E0130 05:25:00.326114 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerName="init"
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.328270 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerName="init"
Jan 30 05:25:00 crc kubenswrapper[4931]: E0130 05:25:00.328369 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerName="dnsmasq-dns"
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.328460 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerName="dnsmasq-dns"
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.328772 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" containerName="dnsmasq-dns"
Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.329529 4931 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.331834 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.355548 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9w9jf"] Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.465868 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9490636b-6e3e-48ea-85e7-3712196bc768-operator-scripts\") pod \"root-account-create-update-9w9jf\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.466003 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq2f9\" (UniqueName: \"kubernetes.io/projected/9490636b-6e3e-48ea-85e7-3712196bc768-kube-api-access-wq2f9\") pod \"root-account-create-update-9w9jf\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.567665 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9490636b-6e3e-48ea-85e7-3712196bc768-operator-scripts\") pod \"root-account-create-update-9w9jf\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.567789 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq2f9\" (UniqueName: \"kubernetes.io/projected/9490636b-6e3e-48ea-85e7-3712196bc768-kube-api-access-wq2f9\") pod \"root-account-create-update-9w9jf\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.568670 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9490636b-6e3e-48ea-85e7-3712196bc768-operator-scripts\") pod \"root-account-create-update-9w9jf\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.583885 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq2f9\" (UniqueName: \"kubernetes.io/projected/9490636b-6e3e-48ea-85e7-3712196bc768-kube-api-access-wq2f9\") pod \"root-account-create-update-9w9jf\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:00 crc kubenswrapper[4931]: I0130 05:25:00.645542 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.016917 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" event={"ID":"eb20eb5e-4f22-4088-98dc-44eaf5ac5958","Type":"ContainerStarted","Data":"48a476fc993377e732a41ec538be9f206289fff472ea517e4a1e5122eac3f5e4"} Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.017880 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.035585 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" podStartSLOduration=6.035568647 podStartE2EDuration="6.035568647s" podCreationTimestamp="2026-01-30 05:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:01.031516321 +0000 UTC m=+1036.401426598" watchObservedRunningTime="2026-01-30 05:25:01.035568647 +0000 UTC m=+1036.405478914" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.434146 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34b87df-8978-4e2d-9875-a6b81a09fa84" path="/var/lib/kubelet/pods/a34b87df-8978-4e2d-9875-a6b81a09fa84/volumes" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.561510 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df05-account-create-update-xmzpk" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.717989 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1b1f6c-2147-48f7-87ea-e64672036831-operator-scripts\") pod \"bf1b1f6c-2147-48f7-87ea-e64672036831\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.718180 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69lh\" (UniqueName: \"kubernetes.io/projected/bf1b1f6c-2147-48f7-87ea-e64672036831-kube-api-access-k69lh\") pod \"bf1b1f6c-2147-48f7-87ea-e64672036831\" (UID: \"bf1b1f6c-2147-48f7-87ea-e64672036831\") " Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.719299 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1b1f6c-2147-48f7-87ea-e64672036831-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf1b1f6c-2147-48f7-87ea-e64672036831" (UID: "bf1b1f6c-2147-48f7-87ea-e64672036831"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.724409 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1b1f6c-2147-48f7-87ea-e64672036831-kube-api-access-k69lh" (OuterVolumeSpecName: "kube-api-access-k69lh") pod "bf1b1f6c-2147-48f7-87ea-e64672036831" (UID: "bf1b1f6c-2147-48f7-87ea-e64672036831"). InnerVolumeSpecName "kube-api-access-k69lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.819914 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k69lh\" (UniqueName: \"kubernetes.io/projected/bf1b1f6c-2147-48f7-87ea-e64672036831-kube-api-access-k69lh\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:01 crc kubenswrapper[4931]: I0130 05:25:01.819952 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1b1f6c-2147-48f7-87ea-e64672036831-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.033442 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df05-account-create-update-xmzpk" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.033924 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df05-account-create-update-xmzpk" event={"ID":"bf1b1f6c-2147-48f7-87ea-e64672036831","Type":"ContainerDied","Data":"03bab058fa21df89a2e2e3b3b9b06339747851c18634d63060a8d6a53301dcfa"} Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.033963 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03bab058fa21df89a2e2e3b3b9b06339747851c18634d63060a8d6a53301dcfa" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.718693 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9z9pd" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.841207 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snvbc\" (UniqueName: \"kubernetes.io/projected/6ee75b9c-df74-490e-94ff-21eacce0b65a-kube-api-access-snvbc\") pod \"6ee75b9c-df74-490e-94ff-21eacce0b65a\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.841403 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee75b9c-df74-490e-94ff-21eacce0b65a-operator-scripts\") pod \"6ee75b9c-df74-490e-94ff-21eacce0b65a\" (UID: \"6ee75b9c-df74-490e-94ff-21eacce0b65a\") " Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.842540 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee75b9c-df74-490e-94ff-21eacce0b65a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ee75b9c-df74-490e-94ff-21eacce0b65a" (UID: "6ee75b9c-df74-490e-94ff-21eacce0b65a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.848867 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee75b9c-df74-490e-94ff-21eacce0b65a-kube-api-access-snvbc" (OuterVolumeSpecName: "kube-api-access-snvbc") pod "6ee75b9c-df74-490e-94ff-21eacce0b65a" (UID: "6ee75b9c-df74-490e-94ff-21eacce0b65a"). InnerVolumeSpecName "kube-api-access-snvbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.943688 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ee75b9c-df74-490e-94ff-21eacce0b65a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.943953 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snvbc\" (UniqueName: \"kubernetes.io/projected/6ee75b9c-df74-490e-94ff-21eacce0b65a-kube-api-access-snvbc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:02 crc kubenswrapper[4931]: I0130 05:25:02.980642 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9w9jf"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.015804 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9bbdw"] Jan 30 05:25:03 crc kubenswrapper[4931]: E0130 05:25:03.016189 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee75b9c-df74-490e-94ff-21eacce0b65a" containerName="mariadb-database-create" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.016202 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee75b9c-df74-490e-94ff-21eacce0b65a" containerName="mariadb-database-create" Jan 30 05:25:03 crc kubenswrapper[4931]: E0130 05:25:03.016218 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1b1f6c-2147-48f7-87ea-e64672036831" containerName="mariadb-account-create-update" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.016224 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1b1f6c-2147-48f7-87ea-e64672036831" containerName="mariadb-account-create-update" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.016370 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1b1f6c-2147-48f7-87ea-e64672036831" containerName="mariadb-account-create-update" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.016387 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee75b9c-df74-490e-94ff-21eacce0b65a" containerName="mariadb-database-create" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.016917 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.027581 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9bbdw"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.041986 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bcdcb" event={"ID":"9b9ebe73-0201-4486-9de9-e8828e84de53","Type":"ContainerStarted","Data":"397e61b0c2ae3421e485ee4187f967d9302d0416b0227f42b0fd6c0769496dc7"} Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.043814 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9w9jf" event={"ID":"9490636b-6e3e-48ea-85e7-3712196bc768","Type":"ContainerStarted","Data":"96c690aab96e9e6dade39cf91865ee404be8f47488bd67014a7dbe9d3a7a4709"} Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.050367 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9z9pd" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.050406 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9z9pd" event={"ID":"6ee75b9c-df74-490e-94ff-21eacce0b65a","Type":"ContainerDied","Data":"9419c3c276f323b6a29f149a011311b1d434b4549c0a4bf9eff6ed75a0a8a514"} Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.050476 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9419c3c276f323b6a29f149a011311b1d434b4549c0a4bf9eff6ed75a0a8a514" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.066382 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bcdcb" podStartSLOduration=2.103062727 podStartE2EDuration="7.0663642s" podCreationTimestamp="2026-01-30 05:24:56 +0000 UTC" firstStartedPulling="2026-01-30 05:24:57.766203506 +0000 UTC m=+1033.136113763" lastFinishedPulling="2026-01-30 05:25:02.729504949 +0000 UTC m=+1038.099415236" observedRunningTime="2026-01-30 05:25:03.058975158 +0000 UTC m=+1038.428885415" watchObservedRunningTime="2026-01-30 05:25:03.0663642 +0000 UTC m=+1038.436274457" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.110246 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-595b-account-create-update-hcchn"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.111553 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.113356 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.116565 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-595b-account-create-update-hcchn"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.146703 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d338366-1ff1-4c95-aa94-30ba5c813138-operator-scripts\") pod \"keystone-db-create-9bbdw\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.146819 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmsd\" (UniqueName: \"kubernetes.io/projected/0d338366-1ff1-4c95-aa94-30ba5c813138-kube-api-access-cgmsd\") pod \"keystone-db-create-9bbdw\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.247710 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d338366-1ff1-4c95-aa94-30ba5c813138-operator-scripts\") pod \"keystone-db-create-9bbdw\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.248045 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmsd\" (UniqueName: \"kubernetes.io/projected/0d338366-1ff1-4c95-aa94-30ba5c813138-kube-api-access-cgmsd\") pod \"keystone-db-create-9bbdw\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc 
kubenswrapper[4931]: I0130 05:25:03.248144 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5722020-7619-4a17-8990-e025402e2c3a-operator-scripts\") pod \"keystone-595b-account-create-update-hcchn\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.248292 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxdl\" (UniqueName: \"kubernetes.io/projected/c5722020-7619-4a17-8990-e025402e2c3a-kube-api-access-vbxdl\") pod \"keystone-595b-account-create-update-hcchn\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.248394 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d338366-1ff1-4c95-aa94-30ba5c813138-operator-scripts\") pod \"keystone-db-create-9bbdw\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.262983 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmsd\" (UniqueName: \"kubernetes.io/projected/0d338366-1ff1-4c95-aa94-30ba5c813138-kube-api-access-cgmsd\") pod \"keystone-db-create-9bbdw\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.298952 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hqm5b"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.300080 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.311963 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqm5b"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.334045 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.349743 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxdl\" (UniqueName: \"kubernetes.io/projected/c5722020-7619-4a17-8990-e025402e2c3a-kube-api-access-vbxdl\") pod \"keystone-595b-account-create-update-hcchn\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.350079 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5722020-7619-4a17-8990-e025402e2c3a-operator-scripts\") pod \"keystone-595b-account-create-update-hcchn\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.350725 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5722020-7619-4a17-8990-e025402e2c3a-operator-scripts\") pod \"keystone-595b-account-create-update-hcchn\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.365039 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxdl\" (UniqueName: \"kubernetes.io/projected/c5722020-7619-4a17-8990-e025402e2c3a-kube-api-access-vbxdl\") pod \"keystone-595b-account-create-update-hcchn\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.436006 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a921-account-create-update-mqpxv"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.437493 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.439168 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.439860 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.457041 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7tv\" (UniqueName: \"kubernetes.io/projected/cae14e96-e869-491f-bbab-32bccf87cc10-kube-api-access-ph7tv\") pod \"placement-db-create-hqm5b\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.457158 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae14e96-e869-491f-bbab-32bccf87cc10-operator-scripts\") pod \"placement-db-create-hqm5b\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.458528 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a921-account-create-update-mqpxv"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.559093 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7tv\" (UniqueName: \"kubernetes.io/projected/cae14e96-e869-491f-bbab-32bccf87cc10-kube-api-access-ph7tv\") pod \"placement-db-create-hqm5b\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.559173 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-operator-scripts\") pod \"placement-a921-account-create-update-mqpxv\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.559222 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2m2x\" (UniqueName: \"kubernetes.io/projected/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-kube-api-access-l2m2x\") pod \"placement-a921-account-create-update-mqpxv\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.559254 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae14e96-e869-491f-bbab-32bccf87cc10-operator-scripts\") pod \"placement-db-create-hqm5b\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.561359 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae14e96-e869-491f-bbab-32bccf87cc10-operator-scripts\") pod \"placement-db-create-hqm5b\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.580187 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7tv\" (UniqueName: 
\"kubernetes.io/projected/cae14e96-e869-491f-bbab-32bccf87cc10-kube-api-access-ph7tv\") pod \"placement-db-create-hqm5b\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.614072 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.660392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-operator-scripts\") pod \"placement-a921-account-create-update-mqpxv\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.660472 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2m2x\" (UniqueName: \"kubernetes.io/projected/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-kube-api-access-l2m2x\") pod \"placement-a921-account-create-update-mqpxv\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.661395 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-operator-scripts\") pod \"placement-a921-account-create-update-mqpxv\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.683668 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2m2x\" (UniqueName: \"kubernetes.io/projected/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-kube-api-access-l2m2x\") pod \"placement-a921-account-create-update-mqpxv\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.760660 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.811275 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9bbdw"] Jan 30 05:25:03 crc kubenswrapper[4931]: I0130 05:25:03.888893 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-595b-account-create-update-hcchn"] Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.011602 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wxb94"] Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.012720 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.015081 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.015551 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lnq99" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.019662 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wxb94"] Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.062418 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hqm5b"] Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.069199 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9bbdw" event={"ID":"0d338366-1ff1-4c95-aa94-30ba5c813138","Type":"ContainerStarted","Data":"572f09b29f02dcc488a3f5a5c3037d927c6d17a2fec69ebb6da1e983b4bf5d1f"} Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.069255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9bbdw" event={"ID":"0d338366-1ff1-4c95-aa94-30ba5c813138","Type":"ContainerStarted","Data":"79c07dc3658fb0f780ed178d88836e00752deac8a60e3cf4f66c4d5151cb9b1c"} Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.070397 4931 generic.go:334] "Generic (PLEG): container finished" podID="9490636b-6e3e-48ea-85e7-3712196bc768" containerID="0aa30d8d9eae66f63b97cadd6e1c8c0a9f5fe5356f82b3165d21d6b90e8f054f" exitCode=0 Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.070465 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9w9jf" event={"ID":"9490636b-6e3e-48ea-85e7-3712196bc768","Type":"ContainerDied","Data":"0aa30d8d9eae66f63b97cadd6e1c8c0a9f5fe5356f82b3165d21d6b90e8f054f"} Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.072671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595b-account-create-update-hcchn" event={"ID":"c5722020-7619-4a17-8990-e025402e2c3a","Type":"ContainerStarted","Data":"99d8e77ad688a72b40a0abbb974ba43df90e44bf29240bd1bb5c2d0a67083646"} Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.173695 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-config-data\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.173852 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpl8h\" (UniqueName: \"kubernetes.io/projected/08c65b18-0526-4eec-a608-20478c5eb008-kube-api-access-xpl8h\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.174007 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.174152 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-combined-ca-bundle\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.174233 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-db-sync-config-data\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: E0130 05:25:04.174696 4931 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 05:25:04 crc kubenswrapper[4931]: E0130 05:25:04.174722 4931 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 05:25:04 crc kubenswrapper[4931]: E0130 05:25:04.174766 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift podName:52577244-c181-4919-b5b0-040e229163db nodeName:}" failed. No retries permitted until 2026-01-30 05:25:12.174749168 +0000 UTC m=+1047.544659425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift") pod "swift-storage-0" (UID: "52577244-c181-4919-b5b0-040e229163db") : configmap "swift-ring-files" not found Jan 30 05:25:04 crc kubenswrapper[4931]: W0130 05:25:04.239581 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda1ef5f2_7d57_4f89_9b48_9c603b322e5e.slice/crio-c4ee527560bb58f58bc3c41da98915d6f9a864bb846ed3e7aaa30a35f39dbc90 WatchSource:0}: Error finding container c4ee527560bb58f58bc3c41da98915d6f9a864bb846ed3e7aaa30a35f39dbc90: Status 404 returned error can't find the container with id c4ee527560bb58f58bc3c41da98915d6f9a864bb846ed3e7aaa30a35f39dbc90 Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.240857 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a921-account-create-update-mqpxv"] Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.275670 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-combined-ca-bundle\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.275726 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-db-sync-config-data\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.275783 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-config-data\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.275822 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpl8h\" (UniqueName: \"kubernetes.io/projected/08c65b18-0526-4eec-a608-20478c5eb008-kube-api-access-xpl8h\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.282839 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-combined-ca-bundle\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.287168 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-db-sync-config-data\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.289399 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-config-data\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.302164 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpl8h\" (UniqueName: \"kubernetes.io/projected/08c65b18-0526-4eec-a608-20478c5eb008-kube-api-access-xpl8h\") pod \"glance-db-sync-wxb94\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.326031 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:04 crc kubenswrapper[4931]: I0130 05:25:04.921794 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wxb94"] Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.090074 4931 generic.go:334] "Generic (PLEG): container finished" podID="0d338366-1ff1-4c95-aa94-30ba5c813138" containerID="572f09b29f02dcc488a3f5a5c3037d927c6d17a2fec69ebb6da1e983b4bf5d1f" exitCode=0 Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.090158 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9bbdw" event={"ID":"0d338366-1ff1-4c95-aa94-30ba5c813138","Type":"ContainerDied","Data":"572f09b29f02dcc488a3f5a5c3037d927c6d17a2fec69ebb6da1e983b4bf5d1f"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.092179 4931 generic.go:334] "Generic (PLEG): container finished" podID="cae14e96-e869-491f-bbab-32bccf87cc10" containerID="1140a7961d708d05c85bc33a569a12461dd710e3403faa5dc7621241292e7e99" exitCode=0 Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.092233 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqm5b" event={"ID":"cae14e96-e869-491f-bbab-32bccf87cc10","Type":"ContainerDied","Data":"1140a7961d708d05c85bc33a569a12461dd710e3403faa5dc7621241292e7e99"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.092292 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqm5b" event={"ID":"cae14e96-e869-491f-bbab-32bccf87cc10","Type":"ContainerStarted","Data":"2dca52403afafa858d8c38ec0a9e5cde23ae060bb8c4aa75b1a7b8fb8ca506d0"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.093798 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5722020-7619-4a17-8990-e025402e2c3a" containerID="424ff0eefa4783d3488bc19f3934cfec69b31ed4d156eca267b961eb0d363be6" exitCode=0 Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.093839 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595b-account-create-update-hcchn" event={"ID":"c5722020-7619-4a17-8990-e025402e2c3a","Type":"ContainerDied","Data":"424ff0eefa4783d3488bc19f3934cfec69b31ed4d156eca267b961eb0d363be6"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.096053 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxb94" event={"ID":"08c65b18-0526-4eec-a608-20478c5eb008","Type":"ContainerStarted","Data":"1dde39fd71deaa1577ea5797017a140ffe24ad73bc61b3566927cd1bee60c4f1"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.097455 4931 generic.go:334] "Generic (PLEG): container finished" podID="da1ef5f2-7d57-4f89-9b48-9c603b322e5e" containerID="72e98c8676f758af58c2fffef7c54cd9bedf5ae4210e865b9220280e84a05578" exitCode=0 Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.097518 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a921-account-create-update-mqpxv" event={"ID":"da1ef5f2-7d57-4f89-9b48-9c603b322e5e","Type":"ContainerDied","Data":"72e98c8676f758af58c2fffef7c54cd9bedf5ae4210e865b9220280e84a05578"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.097587 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a921-account-create-update-mqpxv" event={"ID":"da1ef5f2-7d57-4f89-9b48-9c603b322e5e","Type":"ContainerStarted","Data":"c4ee527560bb58f58bc3c41da98915d6f9a864bb846ed3e7aaa30a35f39dbc90"} Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.500517 4931 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.592675 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.619875 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9490636b-6e3e-48ea-85e7-3712196bc768-operator-scripts\") pod \"9490636b-6e3e-48ea-85e7-3712196bc768\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.619948 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq2f9\" (UniqueName: \"kubernetes.io/projected/9490636b-6e3e-48ea-85e7-3712196bc768-kube-api-access-wq2f9\") pod \"9490636b-6e3e-48ea-85e7-3712196bc768\" (UID: \"9490636b-6e3e-48ea-85e7-3712196bc768\") " Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.620736 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9490636b-6e3e-48ea-85e7-3712196bc768-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9490636b-6e3e-48ea-85e7-3712196bc768" (UID: "9490636b-6e3e-48ea-85e7-3712196bc768"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.665844 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9490636b-6e3e-48ea-85e7-3712196bc768-kube-api-access-wq2f9" (OuterVolumeSpecName: "kube-api-access-wq2f9") pod "9490636b-6e3e-48ea-85e7-3712196bc768" (UID: "9490636b-6e3e-48ea-85e7-3712196bc768"). InnerVolumeSpecName "kube-api-access-wq2f9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.693750 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v6tmx"] Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.694158 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" podUID="c052a747-4d6e-459f-80c2-b690015e411d" containerName="dnsmasq-dns" containerID="cri-o://3eb355680179efbcbb2cf73e83f9b34f38755a348dc73a0a8db4b58a9c1de2f1" gracePeriod=10 Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.723103 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9490636b-6e3e-48ea-85e7-3712196bc768-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:05 crc kubenswrapper[4931]: I0130 05:25:05.723133 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq2f9\" (UniqueName: \"kubernetes.io/projected/9490636b-6e3e-48ea-85e7-3712196bc768-kube-api-access-wq2f9\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.108697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9w9jf" event={"ID":"9490636b-6e3e-48ea-85e7-3712196bc768","Type":"ContainerDied","Data":"96c690aab96e9e6dade39cf91865ee404be8f47488bd67014a7dbe9d3a7a4709"} Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.108743 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c690aab96e9e6dade39cf91865ee404be8f47488bd67014a7dbe9d3a7a4709" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.108809 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9w9jf" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.116550 4931 generic.go:334] "Generic (PLEG): container finished" podID="c052a747-4d6e-459f-80c2-b690015e411d" containerID="3eb355680179efbcbb2cf73e83f9b34f38755a348dc73a0a8db4b58a9c1de2f1" exitCode=0 Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.116776 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" event={"ID":"c052a747-4d6e-459f-80c2-b690015e411d","Type":"ContainerDied","Data":"3eb355680179efbcbb2cf73e83f9b34f38755a348dc73a0a8db4b58a9c1de2f1"} Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.116803 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" event={"ID":"c052a747-4d6e-459f-80c2-b690015e411d","Type":"ContainerDied","Data":"d8ae5c6c06a93c29197bfde41e6a215859930a15dc388d2269865aa48021ba9a"} Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.116815 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ae5c6c06a93c29197bfde41e6a215859930a15dc388d2269865aa48021ba9a" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.158337 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.344361 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-sb\") pod \"c052a747-4d6e-459f-80c2-b690015e411d\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.344460 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-config\") pod \"c052a747-4d6e-459f-80c2-b690015e411d\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.344527 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-nb\") pod \"c052a747-4d6e-459f-80c2-b690015e411d\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.344599 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpfk6\" (UniqueName: \"kubernetes.io/projected/c052a747-4d6e-459f-80c2-b690015e411d-kube-api-access-dpfk6\") pod \"c052a747-4d6e-459f-80c2-b690015e411d\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.344640 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-dns-svc\") pod \"c052a747-4d6e-459f-80c2-b690015e411d\" (UID: \"c052a747-4d6e-459f-80c2-b690015e411d\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.363465 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c052a747-4d6e-459f-80c2-b690015e411d-kube-api-access-dpfk6" (OuterVolumeSpecName: "kube-api-access-dpfk6") pod "c052a747-4d6e-459f-80c2-b690015e411d" (UID: "c052a747-4d6e-459f-80c2-b690015e411d"). InnerVolumeSpecName "kube-api-access-dpfk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.393641 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-config" (OuterVolumeSpecName: "config") pod "c052a747-4d6e-459f-80c2-b690015e411d" (UID: "c052a747-4d6e-459f-80c2-b690015e411d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.396272 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c052a747-4d6e-459f-80c2-b690015e411d" (UID: "c052a747-4d6e-459f-80c2-b690015e411d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.402233 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c052a747-4d6e-459f-80c2-b690015e411d" (UID: "c052a747-4d6e-459f-80c2-b690015e411d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.420987 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c052a747-4d6e-459f-80c2-b690015e411d" (UID: "c052a747-4d6e-459f-80c2-b690015e411d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.446620 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.446642 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.446650 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.446661 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpfk6\" (UniqueName: \"kubernetes.io/projected/c052a747-4d6e-459f-80c2-b690015e411d-kube-api-access-dpfk6\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.446671 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c052a747-4d6e-459f-80c2-b690015e411d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.506848 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.633209 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.651719 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmsd\" (UniqueName: \"kubernetes.io/projected/0d338366-1ff1-4c95-aa94-30ba5c813138-kube-api-access-cgmsd\") pod \"0d338366-1ff1-4c95-aa94-30ba5c813138\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.652224 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d338366-1ff1-4c95-aa94-30ba5c813138-operator-scripts\") pod \"0d338366-1ff1-4c95-aa94-30ba5c813138\" (UID: \"0d338366-1ff1-4c95-aa94-30ba5c813138\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.652997 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d338366-1ff1-4c95-aa94-30ba5c813138-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d338366-1ff1-4c95-aa94-30ba5c813138" (UID: "0d338366-1ff1-4c95-aa94-30ba5c813138"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.658314 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d338366-1ff1-4c95-aa94-30ba5c813138-kube-api-access-cgmsd" (OuterVolumeSpecName: "kube-api-access-cgmsd") pod "0d338366-1ff1-4c95-aa94-30ba5c813138" (UID: "0d338366-1ff1-4c95-aa94-30ba5c813138"). InnerVolumeSpecName "kube-api-access-cgmsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.662702 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.668227 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.753902 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae14e96-e869-491f-bbab-32bccf87cc10-operator-scripts\") pod \"cae14e96-e869-491f-bbab-32bccf87cc10\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.753988 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph7tv\" (UniqueName: \"kubernetes.io/projected/cae14e96-e869-491f-bbab-32bccf87cc10-kube-api-access-ph7tv\") pod \"cae14e96-e869-491f-bbab-32bccf87cc10\" (UID: \"cae14e96-e869-491f-bbab-32bccf87cc10\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.754287 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cae14e96-e869-491f-bbab-32bccf87cc10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cae14e96-e869-491f-bbab-32bccf87cc10" (UID: "cae14e96-e869-491f-bbab-32bccf87cc10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.754582 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae14e96-e869-491f-bbab-32bccf87cc10-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.754600 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmsd\" (UniqueName: \"kubernetes.io/projected/0d338366-1ff1-4c95-aa94-30ba5c813138-kube-api-access-cgmsd\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.754611 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d338366-1ff1-4c95-aa94-30ba5c813138-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.757642 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae14e96-e869-491f-bbab-32bccf87cc10-kube-api-access-ph7tv" (OuterVolumeSpecName: "kube-api-access-ph7tv") pod "cae14e96-e869-491f-bbab-32bccf87cc10" (UID: "cae14e96-e869-491f-bbab-32bccf87cc10"). InnerVolumeSpecName "kube-api-access-ph7tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.856701 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-operator-scripts\") pod \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.856783 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxdl\" (UniqueName: \"kubernetes.io/projected/c5722020-7619-4a17-8990-e025402e2c3a-kube-api-access-vbxdl\") pod \"c5722020-7619-4a17-8990-e025402e2c3a\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.857003 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2m2x\" (UniqueName: \"kubernetes.io/projected/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-kube-api-access-l2m2x\") pod \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\" (UID: \"da1ef5f2-7d57-4f89-9b48-9c603b322e5e\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.857068 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5722020-7619-4a17-8990-e025402e2c3a-operator-scripts\") pod \"c5722020-7619-4a17-8990-e025402e2c3a\" (UID: \"c5722020-7619-4a17-8990-e025402e2c3a\") " Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.857902 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5722020-7619-4a17-8990-e025402e2c3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5722020-7619-4a17-8990-e025402e2c3a" (UID: "c5722020-7619-4a17-8990-e025402e2c3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.858571 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5722020-7619-4a17-8990-e025402e2c3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.858602 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph7tv\" (UniqueName: \"kubernetes.io/projected/cae14e96-e869-491f-bbab-32bccf87cc10-kube-api-access-ph7tv\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.858849 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da1ef5f2-7d57-4f89-9b48-9c603b322e5e" (UID: "da1ef5f2-7d57-4f89-9b48-9c603b322e5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.862539 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5722020-7619-4a17-8990-e025402e2c3a-kube-api-access-vbxdl" (OuterVolumeSpecName: "kube-api-access-vbxdl") pod "c5722020-7619-4a17-8990-e025402e2c3a" (UID: "c5722020-7619-4a17-8990-e025402e2c3a"). InnerVolumeSpecName "kube-api-access-vbxdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.865974 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-kube-api-access-l2m2x" (OuterVolumeSpecName: "kube-api-access-l2m2x") pod "da1ef5f2-7d57-4f89-9b48-9c603b322e5e" (UID: "da1ef5f2-7d57-4f89-9b48-9c603b322e5e"). InnerVolumeSpecName "kube-api-access-l2m2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.882764 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9w9jf"] Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.889720 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9w9jf"] Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.959821 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxdl\" (UniqueName: \"kubernetes.io/projected/c5722020-7619-4a17-8990-e025402e2c3a-kube-api-access-vbxdl\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.959859 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2m2x\" (UniqueName: \"kubernetes.io/projected/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-kube-api-access-l2m2x\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:06 crc kubenswrapper[4931]: I0130 05:25:06.959871 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da1ef5f2-7d57-4f89-9b48-9c603b322e5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.126227 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hqm5b" event={"ID":"cae14e96-e869-491f-bbab-32bccf87cc10","Type":"ContainerDied","Data":"2dca52403afafa858d8c38ec0a9e5cde23ae060bb8c4aa75b1a7b8fb8ca506d0"} Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.126262 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dca52403afafa858d8c38ec0a9e5cde23ae060bb8c4aa75b1a7b8fb8ca506d0" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.126333 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hqm5b" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.127923 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-595b-account-create-update-hcchn" event={"ID":"c5722020-7619-4a17-8990-e025402e2c3a","Type":"ContainerDied","Data":"99d8e77ad688a72b40a0abbb974ba43df90e44bf29240bd1bb5c2d0a67083646"} Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.127959 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d8e77ad688a72b40a0abbb974ba43df90e44bf29240bd1bb5c2d0a67083646" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.128017 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-595b-account-create-update-hcchn" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.139512 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a921-account-create-update-mqpxv" event={"ID":"da1ef5f2-7d57-4f89-9b48-9c603b322e5e","Type":"ContainerDied","Data":"c4ee527560bb58f58bc3c41da98915d6f9a864bb846ed3e7aaa30a35f39dbc90"} Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.139550 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ee527560bb58f58bc3c41da98915d6f9a864bb846ed3e7aaa30a35f39dbc90" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.139526 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a921-account-create-update-mqpxv" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.141884 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9bbdw" event={"ID":"0d338366-1ff1-4c95-aa94-30ba5c813138","Type":"ContainerDied","Data":"79c07dc3658fb0f780ed178d88836e00752deac8a60e3cf4f66c4d5151cb9b1c"} Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.141946 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c07dc3658fb0f780ed178d88836e00752deac8a60e3cf4f66c4d5151cb9b1c" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.141904 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-v6tmx" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.141998 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9bbdw" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.195938 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v6tmx"] Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.203893 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-v6tmx"] Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.430076 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9490636b-6e3e-48ea-85e7-3712196bc768" path="/var/lib/kubelet/pods/9490636b-6e3e-48ea-85e7-3712196bc768/volumes" Jan 30 05:25:07 crc kubenswrapper[4931]: I0130 05:25:07.430580 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c052a747-4d6e-459f-80c2-b690015e411d" path="/var/lib/kubelet/pods/c052a747-4d6e-459f-80c2-b690015e411d/volumes" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.167317 4931 generic.go:334] "Generic (PLEG): container finished" podID="9b9ebe73-0201-4486-9de9-e8828e84de53" containerID="397e61b0c2ae3421e485ee4187f967d9302d0416b0227f42b0fd6c0769496dc7" exitCode=0 Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.167466 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bcdcb" event={"ID":"9b9ebe73-0201-4486-9de9-e8828e84de53","Type":"ContainerDied","Data":"397e61b0c2ae3421e485ee4187f967d9302d0416b0227f42b0fd6c0769496dc7"} Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.340372 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qzj5h"] Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.340871 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae14e96-e869-491f-bbab-32bccf87cc10" containerName="mariadb-database-create" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 
05:25:10.340902 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae14e96-e869-491f-bbab-32bccf87cc10" containerName="mariadb-database-create" Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.340932 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d338366-1ff1-4c95-aa94-30ba5c813138" containerName="mariadb-database-create" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.340944 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d338366-1ff1-4c95-aa94-30ba5c813138" containerName="mariadb-database-create" Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.340969 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5722020-7619-4a17-8990-e025402e2c3a" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.340982 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5722020-7619-4a17-8990-e025402e2c3a" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.341003 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c052a747-4d6e-459f-80c2-b690015e411d" containerName="init" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341015 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c052a747-4d6e-459f-80c2-b690015e411d" containerName="init" Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.341034 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9490636b-6e3e-48ea-85e7-3712196bc768" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341047 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9490636b-6e3e-48ea-85e7-3712196bc768" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.341061 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1ef5f2-7d57-4f89-9b48-9c603b322e5e" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341073 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1ef5f2-7d57-4f89-9b48-9c603b322e5e" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: E0130 05:25:10.341108 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c052a747-4d6e-459f-80c2-b690015e411d" containerName="dnsmasq-dns" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341120 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c052a747-4d6e-459f-80c2-b690015e411d" containerName="dnsmasq-dns" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341402 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9490636b-6e3e-48ea-85e7-3712196bc768" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341465 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d338366-1ff1-4c95-aa94-30ba5c813138" containerName="mariadb-database-create" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341490 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae14e96-e869-491f-bbab-32bccf87cc10" containerName="mariadb-database-create" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341507 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5722020-7619-4a17-8990-e025402e2c3a" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341526 4931 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="da1ef5f2-7d57-4f89-9b48-9c603b322e5e" containerName="mariadb-account-create-update" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.341551 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c052a747-4d6e-459f-80c2-b690015e411d" containerName="dnsmasq-dns" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.342321 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.344493 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.348976 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qzj5h"] Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.458395 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.538160 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2q7\" (UniqueName: \"kubernetes.io/projected/602175a2-25e6-472d-b423-5ab4e6d97769-kube-api-access-qw2q7\") pod \"root-account-create-update-qzj5h\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.538829 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602175a2-25e6-472d-b423-5ab4e6d97769-operator-scripts\") pod \"root-account-create-update-qzj5h\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.640342 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602175a2-25e6-472d-b423-5ab4e6d97769-operator-scripts\") pod \"root-account-create-update-qzj5h\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.640573 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2q7\" (UniqueName: \"kubernetes.io/projected/602175a2-25e6-472d-b423-5ab4e6d97769-kube-api-access-qw2q7\") pod \"root-account-create-update-qzj5h\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.641402 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602175a2-25e6-472d-b423-5ab4e6d97769-operator-scripts\") pod \"root-account-create-update-qzj5h\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.667102 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2q7\" (UniqueName: \"kubernetes.io/projected/602175a2-25e6-472d-b423-5ab4e6d97769-kube-api-access-qw2q7\") pod \"root-account-create-update-qzj5h\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:10 crc kubenswrapper[4931]: I0130 05:25:10.677473 4931 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:12 crc kubenswrapper[4931]: I0130 05:25:12.275817 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0" Jan 30 05:25:12 crc kubenswrapper[4931]: I0130 05:25:12.282272 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"swift-storage-0\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " pod="openstack/swift-storage-0" Jan 30 05:25:12 crc kubenswrapper[4931]: I0130 05:25:12.358743 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.242317 4931 generic.go:334] "Generic (PLEG): container finished" podID="081e3873-ea99-4486-925f-784a98e49405" containerID="4db4fc560f1e0be65146b56bdc4340b3ff1c5a4fe7510a353795090f99291213" exitCode=0 Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.242390 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"081e3873-ea99-4486-925f-784a98e49405","Type":"ContainerDied","Data":"4db4fc560f1e0be65146b56bdc4340b3ff1c5a4fe7510a353795090f99291213"} Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.861559 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-bcdcb" Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.974495 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-combined-ca-bundle\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.974787 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-swiftconf\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.974853 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-ring-data-devices\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.974872 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9b9ebe73-0201-4486-9de9-e8828e84de53-etc-swift\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.974942 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-scripts\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.974964 4931 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-dispersionconf\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.975019 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88xkk\" (UniqueName: \"kubernetes.io/projected/9b9ebe73-0201-4486-9de9-e8828e84de53-kube-api-access-88xkk\") pod \"9b9ebe73-0201-4486-9de9-e8828e84de53\" (UID: \"9b9ebe73-0201-4486-9de9-e8828e84de53\") " Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.977398 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.977586 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9ebe73-0201-4486-9de9-e8828e84de53-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.977622 4931 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.981244 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9ebe73-0201-4486-9de9-e8828e84de53-kube-api-access-88xkk" (OuterVolumeSpecName: "kube-api-access-88xkk") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "kube-api-access-88xkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:17 crc kubenswrapper[4931]: I0130 05:25:17.989086 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.001414 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-scripts" (OuterVolumeSpecName: "scripts") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.005411 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.026709 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b9ebe73-0201-4486-9de9-e8828e84de53" (UID: "9b9ebe73-0201-4486-9de9-e8828e84de53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.079836 4931 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9b9ebe73-0201-4486-9de9-e8828e84de53-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.079863 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9b9ebe73-0201-4486-9de9-e8828e84de53-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.079874 4931 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.079882 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88xkk\" (UniqueName: \"kubernetes.io/projected/9b9ebe73-0201-4486-9de9-e8828e84de53-kube-api-access-88xkk\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.079891 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.079899 4931 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9b9ebe73-0201-4486-9de9-e8828e84de53-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.253605 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bcdcb" event={"ID":"9b9ebe73-0201-4486-9de9-e8828e84de53","Type":"ContainerDied","Data":"e0aeb9238b4a2a03f01dbc9b575b6093a31f0d939b0c8c92582fa2fe6528a320"} Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.253648 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0aeb9238b4a2a03f01dbc9b575b6093a31f0d939b0c8c92582fa2fe6528a320" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.253701 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bcdcb" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.262847 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"081e3873-ea99-4486-925f-784a98e49405","Type":"ContainerStarted","Data":"1bb7e19530d33f5a4cf134ca5c6644743c868cc750cac0c1bb313f0f47240dd8"} Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.263124 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.303247 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.850274399 podStartE2EDuration="1m0.303227176s" podCreationTimestamp="2026-01-30 05:24:18 +0000 UTC" firstStartedPulling="2026-01-30 05:24:33.016760119 +0000 UTC m=+1008.386670376" lastFinishedPulling="2026-01-30 05:24:41.469712896 +0000 UTC m=+1016.839623153" observedRunningTime="2026-01-30 05:25:18.298319155 +0000 UTC m=+1053.668229432" watchObservedRunningTime="2026-01-30 05:25:18.303227176 +0000 UTC m=+1053.673137433" Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.347081 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qzj5h"] Jan 30 05:25:18 crc kubenswrapper[4931]: W0130 05:25:18.353645 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602175a2_25e6_472d_b423_5ab4e6d97769.slice/crio-b27357bffaa312a9f30da66999e22bd02e43c53839aafa7ab73c9113f285a395 WatchSource:0}: Error finding container b27357bffaa312a9f30da66999e22bd02e43c53839aafa7ab73c9113f285a395: Status 404 returned error can't find the container with id b27357bffaa312a9f30da66999e22bd02e43c53839aafa7ab73c9113f285a395 Jan 30 05:25:18 crc kubenswrapper[4931]: I0130 05:25:18.414896 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.282672 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxb94" event={"ID":"08c65b18-0526-4eec-a608-20478c5eb008","Type":"ContainerStarted","Data":"342d9a5530d63ffbf73ce7c8e40d07fb36a0b7d11a82e475df103d0dfda95398"} Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.291348 4931 generic.go:334] "Generic (PLEG): container finished" podID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerID="8bdcfbd624616a917de046867a4b176539e978b80dd1b9fad737dfdab9cb1bce" exitCode=0 Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.291402 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fc3f4796-66b1-452b-afca-5e62cbf2a53b","Type":"ContainerDied","Data":"8bdcfbd624616a917de046867a4b176539e978b80dd1b9fad737dfdab9cb1bce"} Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.321091 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wxb94" podStartSLOduration=3.3785584379999998 podStartE2EDuration="16.321078138s" podCreationTimestamp="2026-01-30 05:25:03 +0000 UTC" firstStartedPulling="2026-01-30 05:25:04.92912182 +0000 UTC m=+1040.299032087" lastFinishedPulling="2026-01-30 05:25:17.87164153 +0000 UTC m=+1053.241551787" observedRunningTime="2026-01-30 05:25:19.320636665 +0000 UTC m=+1054.690546922" watchObservedRunningTime="2026-01-30 05:25:19.321078138 +0000 UTC m=+1054.690988395" Jan 30 05:25:19 crc 
kubenswrapper[4931]: I0130 05:25:19.322472 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"8f709bd92c7c6c28297de5f91b3d8f5726929abc3fede49c29940651ade456cb"} Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.361908 4931 generic.go:334] "Generic (PLEG): container finished" podID="602175a2-25e6-472d-b423-5ab4e6d97769" containerID="a64e91cbe33af673e6689e436885784e9c445a56b737d4748cfcdbf6fce27a53" exitCode=0 Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.363353 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qzj5h" event={"ID":"602175a2-25e6-472d-b423-5ab4e6d97769","Type":"ContainerDied","Data":"a64e91cbe33af673e6689e436885784e9c445a56b737d4748cfcdbf6fce27a53"} Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.363397 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qzj5h" event={"ID":"602175a2-25e6-472d-b423-5ab4e6d97769","Type":"ContainerStarted","Data":"b27357bffaa312a9f30da66999e22bd02e43c53839aafa7ab73c9113f285a395"} Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.401686 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.405802 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.436692 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ggjtl" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" probeResult="failure" output=< Jan 30 05:25:19 crc kubenswrapper[4931]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 05:25:19 crc kubenswrapper[4931]: > Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.621231 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ggjtl-config-57fgk"] Jan 30 05:25:19 crc kubenswrapper[4931]: E0130 05:25:19.621608 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9ebe73-0201-4486-9de9-e8828e84de53" containerName="swift-ring-rebalance" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.621626 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9ebe73-0201-4486-9de9-e8828e84de53" containerName="swift-ring-rebalance" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.621815 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9ebe73-0201-4486-9de9-e8828e84de53" containerName="swift-ring-rebalance" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.622416 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.626397 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.635341 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ggjtl-config-57fgk"] Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.746816 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-scripts\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.746893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-additional-scripts\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.746921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.747016 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-log-ovn\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.747043 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2cqz\" (UniqueName: \"kubernetes.io/projected/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-kube-api-access-s2cqz\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.747067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run-ovn\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.849707 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-log-ovn\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.849737 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2cqz\" (UniqueName: 
\"kubernetes.io/projected/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-kube-api-access-s2cqz\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.849764 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run-ovn\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.849805 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-scripts\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.849850 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-additional-scripts\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.849872 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.850001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-log-ovn\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.850060 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.850070 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run-ovn\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.850806 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-additional-scripts\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.852008 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-scripts\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.867646 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2cqz\" (UniqueName: \"kubernetes.io/projected/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-kube-api-access-s2cqz\") pod \"ovn-controller-ggjtl-config-57fgk\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:19 crc kubenswrapper[4931]: I0130 05:25:19.954651 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.383123 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fc3f4796-66b1-452b-afca-5e62cbf2a53b","Type":"ContainerStarted","Data":"1c82fc5914a60be58942659c6c59b346ba961ba1c401d7f0c82d22447fc0b135"} Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.383614 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.385558 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"64945c3ef451f83b413801249e5cc8cedcb622a5a129ee80defec98e393eed29"} Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.385611 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"de7939eb8c76f478b34a03e7dd08f4a308dc8bbb63a287db0a5b3eec8794cc7c"} Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.385624 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"e78b14aff0684ab7de691a18e86ce169b9e67b8f02342d8eba7927de4cb39ec6"} Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.405627 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ggjtl-config-57fgk"] Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.438377 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.708201877 podStartE2EDuration="1m2.438357871s" podCreationTimestamp="2026-01-30 05:24:18 +0000 UTC" firstStartedPulling="2026-01-30 05:24:33.017768068 +0000 UTC m=+1008.387678325" lastFinishedPulling="2026-01-30 05:24:42.747924052 +0000 UTC m=+1018.117834319" observedRunningTime="2026-01-30 05:25:20.422041531 +0000 UTC m=+1055.791951788" watchObservedRunningTime="2026-01-30 05:25:20.438357871 +0000 UTC m=+1055.808268118" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.671349 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.763144 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2q7\" (UniqueName: \"kubernetes.io/projected/602175a2-25e6-472d-b423-5ab4e6d97769-kube-api-access-qw2q7\") pod \"602175a2-25e6-472d-b423-5ab4e6d97769\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.763210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602175a2-25e6-472d-b423-5ab4e6d97769-operator-scripts\") pod \"602175a2-25e6-472d-b423-5ab4e6d97769\" (UID: \"602175a2-25e6-472d-b423-5ab4e6d97769\") " Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.763886 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602175a2-25e6-472d-b423-5ab4e6d97769-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "602175a2-25e6-472d-b423-5ab4e6d97769" (UID: "602175a2-25e6-472d-b423-5ab4e6d97769"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.770124 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602175a2-25e6-472d-b423-5ab4e6d97769-kube-api-access-qw2q7" (OuterVolumeSpecName: "kube-api-access-qw2q7") pod "602175a2-25e6-472d-b423-5ab4e6d97769" (UID: "602175a2-25e6-472d-b423-5ab4e6d97769"). InnerVolumeSpecName "kube-api-access-qw2q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.865513 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2q7\" (UniqueName: \"kubernetes.io/projected/602175a2-25e6-472d-b423-5ab4e6d97769-kube-api-access-qw2q7\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:20 crc kubenswrapper[4931]: I0130 05:25:20.865737 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/602175a2-25e6-472d-b423-5ab4e6d97769-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.398240 4931 generic.go:334] "Generic (PLEG): container finished" podID="063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" containerID="31798e9f13d46b8721aae715c1edfd7a01d30cecc4d59728bf20993fd26d459b" exitCode=0 Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.398348 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl-config-57fgk" event={"ID":"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b","Type":"ContainerDied","Data":"31798e9f13d46b8721aae715c1edfd7a01d30cecc4d59728bf20993fd26d459b"} Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.398443 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl-config-57fgk" event={"ID":"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b","Type":"ContainerStarted","Data":"5f16f48af4771a2f9b6d640da0d9b09a85cbb3309209ff0e5843e65b694d4ea0"} Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.399789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qzj5h" event={"ID":"602175a2-25e6-472d-b423-5ab4e6d97769","Type":"ContainerDied","Data":"b27357bffaa312a9f30da66999e22bd02e43c53839aafa7ab73c9113f285a395"} Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.399824 4931 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b27357bffaa312a9f30da66999e22bd02e43c53839aafa7ab73c9113f285a395" Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.399886 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qzj5h" Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.407065 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"9eac75907fb7af02eb9159bedaf64c4ca7dee04ca441b549c3a48132b186515f"} Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.915225 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qzj5h"] Jan 30 05:25:21 crc kubenswrapper[4931]: I0130 05:25:21.927053 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qzj5h"] Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.418184 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"6088dfc85b1e09a936dd16faccae994e80e2ccb29840c2c09302842b83328fc2"} Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.418447 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"072ac216076cb0fec2ec21789975f4b6fdf297b846d9774f980cb280a52a2718"} Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.418461 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"840bb675eae49d372214aa49017516eb7cc03feb5f0cebb6fb56a2dd4d0837b9"} Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.418469 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"01987d0b4f025a347544f55c09ea6cf4f3249f746d37222f35ee196eb4525b63"} Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.802305 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.895855 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run-ovn\") pod \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.895936 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-scripts\") pod \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.895962 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-additional-scripts\") pod \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896008 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" (UID: "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896138 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2cqz\" (UniqueName: \"kubernetes.io/projected/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-kube-api-access-s2cqz\") pod \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896209 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run\") pod \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896232 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-log-ovn\") pod \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\" (UID: \"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b\") " Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896306 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run" (OuterVolumeSpecName: "var-run") pod "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" (UID: "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896434 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" (UID: "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896639 4931 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896660 4931 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896670 4931 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896785 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" (UID: "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.896939 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-scripts" (OuterVolumeSpecName: "scripts") pod "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" (UID: "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.910696 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-kube-api-access-s2cqz" (OuterVolumeSpecName: "kube-api-access-s2cqz") pod "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" (UID: "063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b"). InnerVolumeSpecName "kube-api-access-s2cqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.998395 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2cqz\" (UniqueName: \"kubernetes.io/projected/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-kube-api-access-s2cqz\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.998451 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:22 crc kubenswrapper[4931]: I0130 05:25:22.998461 4931 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:23 crc kubenswrapper[4931]: I0130 05:25:23.434210 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ggjtl-config-57fgk" Jan 30 05:25:23 crc kubenswrapper[4931]: I0130 05:25:23.434953 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602175a2-25e6-472d-b423-5ab4e6d97769" path="/var/lib/kubelet/pods/602175a2-25e6-472d-b423-5ab4e6d97769/volumes" Jan 30 05:25:23 crc kubenswrapper[4931]: I0130 05:25:23.437864 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl-config-57fgk" event={"ID":"063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b","Type":"ContainerDied","Data":"5f16f48af4771a2f9b6d640da0d9b09a85cbb3309209ff0e5843e65b694d4ea0"} Jan 30 05:25:23 crc kubenswrapper[4931]: I0130 05:25:23.437899 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f16f48af4771a2f9b6d640da0d9b09a85cbb3309209ff0e5843e65b694d4ea0" Jan 30 05:25:23 crc kubenswrapper[4931]: I0130 05:25:23.920118 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ggjtl-config-57fgk"] Jan 30 05:25:23 crc kubenswrapper[4931]: I0130 05:25:23.931060 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ggjtl-config-57fgk"] Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.401969 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ggjtl" Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.451081 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"fa1a93081b269f4ada317cd8046ebb7f1a7c1edf1f6e97c13ed393eacd7e1973"} Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.452058 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"cf71a5f4e5a8611b3edb8a350ba0e2eedbd78c0fb76770c94841152df4a3ab69"} Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.452147 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"577bb47efe5f44d38e3c888fdf879028a229599eb11554a344d6a077afa58802"} Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.452209 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"2aa8176e0269c78ed82e92b582f8a0a44311ad87daa1079e948a15315c72207f"} Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.452267 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"cc9cceab2cf461936102038fbf1707f4be2e195decb5808193c7f51c4adb08d3"} Jan 30 05:25:24 crc kubenswrapper[4931]: I0130 05:25:24.452332 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"b4e57fcd32132c1d41dc41783803f5dfdbd53a0317437ca189732c5c62a33471"} Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.367429 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r9xdc"] Jan 30 05:25:25 crc kubenswrapper[4931]: E0130 05:25:25.367963 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602175a2-25e6-472d-b423-5ab4e6d97769" 
containerName="mariadb-account-create-update" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.367980 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="602175a2-25e6-472d-b423-5ab4e6d97769" containerName="mariadb-account-create-update" Jan 30 05:25:25 crc kubenswrapper[4931]: E0130 05:25:25.368013 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" containerName="ovn-config" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.368020 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" containerName="ovn-config" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.368153 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" containerName="ovn-config" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.368175 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="602175a2-25e6-472d-b423-5ab4e6d97769" containerName="mariadb-account-create-update" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.368644 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.371027 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.381037 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r9xdc"] Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.444589 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj824\" (UniqueName: \"kubernetes.io/projected/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-kube-api-access-cj824\") pod \"root-account-create-update-r9xdc\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.444766 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-operator-scripts\") pod \"root-account-create-update-r9xdc\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.464211 4931 generic.go:334] "Generic (PLEG): container finished" podID="08c65b18-0526-4eec-a608-20478c5eb008" containerID="342d9a5530d63ffbf73ce7c8e40d07fb36a0b7d11a82e475df103d0dfda95398" exitCode=0 Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.465296 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b" path="/var/lib/kubelet/pods/063f4bc7-8b66-4d8d-b8e2-cf8c713f6a2b/volumes" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.465875 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxb94" event={"ID":"08c65b18-0526-4eec-a608-20478c5eb008","Type":"ContainerDied","Data":"342d9a5530d63ffbf73ce7c8e40d07fb36a0b7d11a82e475df103d0dfda95398"} Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.483926 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerStarted","Data":"7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719"} Jan 
30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.552520 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-operator-scripts\") pod \"root-account-create-update-r9xdc\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.552692 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj824\" (UniqueName: \"kubernetes.io/projected/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-kube-api-access-cj824\") pod \"root-account-create-update-r9xdc\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.553638 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-operator-scripts\") pod \"root-account-create-update-r9xdc\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.572764 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj824\" (UniqueName: \"kubernetes.io/projected/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-kube-api-access-cj824\") pod \"root-account-create-update-r9xdc\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.596006 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=25.717109971 podStartE2EDuration="30.595984539s" podCreationTimestamp="2026-01-30 05:24:55 +0000 UTC" firstStartedPulling="2026-01-30 05:25:18.425397071 +0000 UTC m=+1053.795307328" lastFinishedPulling="2026-01-30 05:25:23.304271639 +0000 UTC m=+1058.674181896" observedRunningTime="2026-01-30 05:25:25.590735148 +0000 UTC m=+1060.960645405" watchObservedRunningTime="2026-01-30 05:25:25.595984539 +0000 UTC m=+1060.965894796" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.749507 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.937071 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-z2kmr"] Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.938307 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.942895 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.960625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.960711 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.960776 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.960828 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-config\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.960858 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktd6m\" (UniqueName: \"kubernetes.io/projected/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-kube-api-access-ktd6m\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.960896 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:25 crc kubenswrapper[4931]: I0130 05:25:25.974263 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-z2kmr"] Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.061925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.061990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: 
\"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.062028 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-config\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.062046 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktd6m\" (UniqueName: \"kubernetes.io/projected/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-kube-api-access-ktd6m\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.062072 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.062119 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.062860 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.063398 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-config\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.063499 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.064597 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.065463 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc 
kubenswrapper[4931]: I0130 05:25:26.089684 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktd6m\" (UniqueName: \"kubernetes.io/projected/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-kube-api-access-ktd6m\") pod \"dnsmasq-dns-8467b54bcc-z2kmr\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.198106 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r9xdc"] Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.264815 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.492404 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9xdc" event={"ID":"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4","Type":"ContainerStarted","Data":"cf97da0cb0eda4f19afadca8bff99c148f7d4875216c60c7fab1145ddf6c9ea3"} Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.492674 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9xdc" event={"ID":"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4","Type":"ContainerStarted","Data":"b215b0a315c6bcf28b690ce191fa2523a30b8bde34ca9d45018d198fcb7ee9fe"} Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.515028 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-r9xdc" podStartSLOduration=1.515007708 podStartE2EDuration="1.515007708s" podCreationTimestamp="2026-01-30 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:26.512142415 +0000 UTC m=+1061.882052682" watchObservedRunningTime="2026-01-30 05:25:26.515007708 +0000 UTC m=+1061.884917985" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.727183 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-z2kmr"] Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.950237 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.974882 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-db-sync-config-data\") pod \"08c65b18-0526-4eec-a608-20478c5eb008\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.974953 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpl8h\" (UniqueName: \"kubernetes.io/projected/08c65b18-0526-4eec-a608-20478c5eb008-kube-api-access-xpl8h\") pod \"08c65b18-0526-4eec-a608-20478c5eb008\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.975035 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-combined-ca-bundle\") pod \"08c65b18-0526-4eec-a608-20478c5eb008\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.975068 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-config-data\") pod \"08c65b18-0526-4eec-a608-20478c5eb008\" (UID: \"08c65b18-0526-4eec-a608-20478c5eb008\") " Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.978714 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c65b18-0526-4eec-a608-20478c5eb008-kube-api-access-xpl8h" (OuterVolumeSpecName: "kube-api-access-xpl8h") pod "08c65b18-0526-4eec-a608-20478c5eb008" (UID: "08c65b18-0526-4eec-a608-20478c5eb008"). InnerVolumeSpecName "kube-api-access-xpl8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:26 crc kubenswrapper[4931]: I0130 05:25:26.982568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "08c65b18-0526-4eec-a608-20478c5eb008" (UID: "08c65b18-0526-4eec-a608-20478c5eb008"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.009885 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08c65b18-0526-4eec-a608-20478c5eb008" (UID: "08c65b18-0526-4eec-a608-20478c5eb008"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.023336 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-config-data" (OuterVolumeSpecName: "config-data") pod "08c65b18-0526-4eec-a608-20478c5eb008" (UID: "08c65b18-0526-4eec-a608-20478c5eb008"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.076964 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.076998 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.077007 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c65b18-0526-4eec-a608-20478c5eb008-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.077019 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpl8h\" (UniqueName: \"kubernetes.io/projected/08c65b18-0526-4eec-a608-20478c5eb008-kube-api-access-xpl8h\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.505814 4931 generic.go:334] "Generic (PLEG): container finished" podID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerID="88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346" exitCode=0 Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.505857 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" event={"ID":"0c2b2206-fcd5-432f-82a7-20e22cd3ceef","Type":"ContainerDied","Data":"88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346"} Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.505902 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" event={"ID":"0c2b2206-fcd5-432f-82a7-20e22cd3ceef","Type":"ContainerStarted","Data":"9d954a97d7ab108beb1f87cd63eb9168552d1563db4086227881a73279ff0b7b"} Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.508621 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wxb94" event={"ID":"08c65b18-0526-4eec-a608-20478c5eb008","Type":"ContainerDied","Data":"1dde39fd71deaa1577ea5797017a140ffe24ad73bc61b3566927cd1bee60c4f1"} Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.508666 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dde39fd71deaa1577ea5797017a140ffe24ad73bc61b3566927cd1bee60c4f1" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.508680 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wxb94" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.510925 4931 generic.go:334] "Generic (PLEG): container finished" podID="6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" containerID="cf97da0cb0eda4f19afadca8bff99c148f7d4875216c60c7fab1145ddf6c9ea3" exitCode=0 Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.510954 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9xdc" event={"ID":"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4","Type":"ContainerDied","Data":"cf97da0cb0eda4f19afadca8bff99c148f7d4875216c60c7fab1145ddf6c9ea3"} Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.961671 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-z2kmr"] Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.979478 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-8c6pt"] Jan 30 05:25:27 crc kubenswrapper[4931]: E0130 05:25:27.979787 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c65b18-0526-4eec-a608-20478c5eb008" containerName="glance-db-sync" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.979804 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c65b18-0526-4eec-a608-20478c5eb008" containerName="glance-db-sync" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.979943 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c65b18-0526-4eec-a608-20478c5eb008" containerName="glance-db-sync" Jan 30 05:25:27 crc kubenswrapper[4931]: I0130 05:25:27.980712 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.008951 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-8c6pt"] Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.096651 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.096710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p52jc\" (UniqueName: \"kubernetes.io/projected/be176172-3d0c-47ae-aa98-d7ee20022f44-kube-api-access-p52jc\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.096730 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.096768 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 
05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.096824 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.096858 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-config\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.197876 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-config\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.198141 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.198175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p52jc\" (UniqueName: \"kubernetes.io/projected/be176172-3d0c-47ae-aa98-d7ee20022f44-kube-api-access-p52jc\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.198191 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.198226 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.198277 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.199091 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 
05:25:28.199591 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-config\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.200079 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.200888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.201515 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.217822 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p52jc\" (UniqueName: \"kubernetes.io/projected/be176172-3d0c-47ae-aa98-d7ee20022f44-kube-api-access-p52jc\") pod \"dnsmasq-dns-56c9bc6f5c-8c6pt\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.298368 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.520954 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" event={"ID":"0c2b2206-fcd5-432f-82a7-20e22cd3ceef","Type":"ContainerStarted","Data":"a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7"} Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.521194 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.543611 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" podStartSLOduration=3.543592508 podStartE2EDuration="3.543592508s" podCreationTimestamp="2026-01-30 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:28.537004689 +0000 UTC m=+1063.906914966" watchObservedRunningTime="2026-01-30 05:25:28.543592508 +0000 UTC m=+1063.913502755" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.571922 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-8c6pt"] Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.891614 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.914281 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj824\" (UniqueName: \"kubernetes.io/projected/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-kube-api-access-cj824\") pod \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.914405 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-operator-scripts\") pod \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\" (UID: \"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4\") " Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.914949 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" (UID: "6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:28 crc kubenswrapper[4931]: I0130 05:25:28.928086 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-kube-api-access-cj824" (OuterVolumeSpecName: "kube-api-access-cj824") pod "6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" (UID: "6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4"). InnerVolumeSpecName "kube-api-access-cj824". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.016457 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj824\" (UniqueName: \"kubernetes.io/projected/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-kube-api-access-cj824\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.016716 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.530118 4931 generic.go:334] "Generic (PLEG): container finished" podID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerID="aa6a0d8cd249f8b0104844bcd59d7c80f0ef6c784ec9f9d65e07215bbb280738" exitCode=0 Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.530166 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" event={"ID":"be176172-3d0c-47ae-aa98-d7ee20022f44","Type":"ContainerDied","Data":"aa6a0d8cd249f8b0104844bcd59d7c80f0ef6c784ec9f9d65e07215bbb280738"} Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.530235 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" event={"ID":"be176172-3d0c-47ae-aa98-d7ee20022f44","Type":"ContainerStarted","Data":"af809bcfb9bcd948f444820cb7e724048ff5c243bf6772c74d31c5eab0630ea9"} Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.531490 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r9xdc" Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.532014 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9xdc" event={"ID":"6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4","Type":"ContainerDied","Data":"b215b0a315c6bcf28b690ce191fa2523a30b8bde34ca9d45018d198fcb7ee9fe"} Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.532035 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b215b0a315c6bcf28b690ce191fa2523a30b8bde34ca9d45018d198fcb7ee9fe" Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.532086 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerName="dnsmasq-dns" containerID="cri-o://a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7" gracePeriod=10 Jan 30 05:25:29 crc kubenswrapper[4931]: I0130 05:25:29.992107 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.039951 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-nb\") pod \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.040007 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-svc\") pod \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.040059 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktd6m\" (UniqueName: \"kubernetes.io/projected/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-kube-api-access-ktd6m\") pod \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.040128 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-sb\") pod \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.040173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-config\") pod \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.040217 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-swift-storage-0\") pod \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\" (UID: \"0c2b2206-fcd5-432f-82a7-20e22cd3ceef\") " Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.044753 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-kube-api-access-ktd6m" (OuterVolumeSpecName: "kube-api-access-ktd6m") pod 
"0c2b2206-fcd5-432f-82a7-20e22cd3ceef" (UID: "0c2b2206-fcd5-432f-82a7-20e22cd3ceef"). InnerVolumeSpecName "kube-api-access-ktd6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.083146 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c2b2206-fcd5-432f-82a7-20e22cd3ceef" (UID: "0c2b2206-fcd5-432f-82a7-20e22cd3ceef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.089495 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c2b2206-fcd5-432f-82a7-20e22cd3ceef" (UID: "0c2b2206-fcd5-432f-82a7-20e22cd3ceef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.090479 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c2b2206-fcd5-432f-82a7-20e22cd3ceef" (UID: "0c2b2206-fcd5-432f-82a7-20e22cd3ceef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.096175 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-config" (OuterVolumeSpecName: "config") pod "0c2b2206-fcd5-432f-82a7-20e22cd3ceef" (UID: "0c2b2206-fcd5-432f-82a7-20e22cd3ceef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.098471 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c2b2206-fcd5-432f-82a7-20e22cd3ceef" (UID: "0c2b2206-fcd5-432f-82a7-20e22cd3ceef"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.143382 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.143718 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.143734 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.143748 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktd6m\" (UniqueName: \"kubernetes.io/projected/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-kube-api-access-ktd6m\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.143762 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.143774 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2b2206-fcd5-432f-82a7-20e22cd3ceef-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.329056 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.436467 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.544392 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" event={"ID":"be176172-3d0c-47ae-aa98-d7ee20022f44","Type":"ContainerStarted","Data":"566ade23da173169f793e0bfc68dfb7fc94d967bbb01c1ecaa6d6c7476150a48"} Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.548035 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.552634 4931 generic.go:334] "Generic (PLEG): container finished" podID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerID="a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7" exitCode=0 Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.552665 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" event={"ID":"0c2b2206-fcd5-432f-82a7-20e22cd3ceef","Type":"ContainerDied","Data":"a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7"} Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.552684 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" event={"ID":"0c2b2206-fcd5-432f-82a7-20e22cd3ceef","Type":"ContainerDied","Data":"9d954a97d7ab108beb1f87cd63eb9168552d1563db4086227881a73279ff0b7b"} Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.552702 4931 scope.go:117] "RemoveContainer" 
containerID="a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.552809 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-z2kmr" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.584903 4931 scope.go:117] "RemoveContainer" containerID="88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.590732 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" podStartSLOduration=3.590719371 podStartE2EDuration="3.590719371s" podCreationTimestamp="2026-01-30 05:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:30.586482359 +0000 UTC m=+1065.956392616" watchObservedRunningTime="2026-01-30 05:25:30.590719371 +0000 UTC m=+1065.960629628" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.606049 4931 scope.go:117] "RemoveContainer" containerID="a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7" Jan 30 05:25:30 crc kubenswrapper[4931]: E0130 05:25:30.610500 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7\": container with ID starting with a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7 not found: ID does not exist" containerID="a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.610541 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7"} err="failed to get container status \"a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7\": rpc error: code = NotFound desc = could not find container \"a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7\": container with ID starting with a37b67a5432034fca03956c6e5e234c7b9a5f80ea311a013b6f1df6284c784f7 not found: ID does not exist" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.610576 4931 scope.go:117] "RemoveContainer" containerID="88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346" Jan 30 05:25:30 crc kubenswrapper[4931]: E0130 05:25:30.613671 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346\": container with ID starting with 88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346 not found: ID does not exist" containerID="88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.613705 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346"} err="failed to get container status \"88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346\": rpc error: code = NotFound desc = could not find container \"88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346\": container with ID starting with 88439359b01e7aed3c2e8e25a07c1a042c6d964234627219eaa5cf8b88027346 not found: ID does not exist" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.620211 
4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-z2kmr"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.626317 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-z2kmr"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.684937 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-cb5c-account-create-update-7n4vq"] Jan 30 05:25:30 crc kubenswrapper[4931]: E0130 05:25:30.685235 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" containerName="mariadb-account-create-update" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.685251 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" containerName="mariadb-account-create-update" Jan 30 05:25:30 crc kubenswrapper[4931]: E0130 05:25:30.685265 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerName="init" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.685271 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerName="init" Jan 30 05:25:30 crc kubenswrapper[4931]: E0130 05:25:30.685285 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerName="dnsmasq-dns" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.685291 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerName="dnsmasq-dns" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.685449 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" containerName="dnsmasq-dns" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.685474 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" containerName="mariadb-account-create-update" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.685941 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.689331 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.712337 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cb5c-account-create-update-7n4vq"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.721083 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-wtjbg"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.721983 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.735630 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wtjbg"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.757279 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsmf\" (UniqueName: \"kubernetes.io/projected/d98e6af1-4571-4da7-a6e8-0b54505af47c-kube-api-access-xwsmf\") pod \"barbican-cb5c-account-create-update-7n4vq\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.757339 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-operator-scripts\") pod \"cinder-db-create-wtjbg\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.757398 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn76\" (UniqueName: \"kubernetes.io/projected/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-kube-api-access-nfn76\") pod \"cinder-db-create-wtjbg\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.757519 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d98e6af1-4571-4da7-a6e8-0b54505af47c-operator-scripts\") pod \"barbican-cb5c-account-create-update-7n4vq\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.808758 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4c2nt"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.809663 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.826318 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4c2nt"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.858320 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsmf\" (UniqueName: \"kubernetes.io/projected/d98e6af1-4571-4da7-a6e8-0b54505af47c-kube-api-access-xwsmf\") pod \"barbican-cb5c-account-create-update-7n4vq\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.858481 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b14699-8089-4af7-b0bd-654a8fda9715-operator-scripts\") pod \"barbican-db-create-4c2nt\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.858563 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-operator-scripts\") pod \"cinder-db-create-wtjbg\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.858769 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfn76\" (UniqueName: \"kubernetes.io/projected/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-kube-api-access-nfn76\") pod \"cinder-db-create-wtjbg\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.858919 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d98e6af1-4571-4da7-a6e8-0b54505af47c-operator-scripts\") pod \"barbican-cb5c-account-create-update-7n4vq\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.858975 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24htz\" (UniqueName: \"kubernetes.io/projected/c3b14699-8089-4af7-b0bd-654a8fda9715-kube-api-access-24htz\") pod \"barbican-db-create-4c2nt\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.859668 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-operator-scripts\") pod \"cinder-db-create-wtjbg\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.859868 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d98e6af1-4571-4da7-a6e8-0b54505af47c-operator-scripts\") pod \"barbican-cb5c-account-create-update-7n4vq\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.885835 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nfn76\" (UniqueName: \"kubernetes.io/projected/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-kube-api-access-nfn76\") pod \"cinder-db-create-wtjbg\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.906534 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsmf\" (UniqueName: \"kubernetes.io/projected/d98e6af1-4571-4da7-a6e8-0b54505af47c-kube-api-access-xwsmf\") pod \"barbican-cb5c-account-create-update-7n4vq\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.944444 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8ee9-account-create-update-sdn4j"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.945724 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.952865 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.960742 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b14699-8089-4af7-b0bd-654a8fda9715-operator-scripts\") pod \"barbican-db-create-4c2nt\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.961551 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b14699-8089-4af7-b0bd-654a8fda9715-operator-scripts\") pod \"barbican-db-create-4c2nt\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.961606 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b78d3f2-c575-4b24-bbb8-c956f61a575d-operator-scripts\") pod \"cinder-8ee9-account-create-update-sdn4j\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.961632 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgs2\" (UniqueName: \"kubernetes.io/projected/3b78d3f2-c575-4b24-bbb8-c956f61a575d-kube-api-access-blgs2\") pod \"cinder-8ee9-account-create-update-sdn4j\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.961606 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ee9-account-create-update-sdn4j"] Jan 30 05:25:30 crc kubenswrapper[4931]: I0130 05:25:30.961729 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24htz\" (UniqueName: \"kubernetes.io/projected/c3b14699-8089-4af7-b0bd-654a8fda9715-kube-api-access-24htz\") pod \"barbican-db-create-4c2nt\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.001402 4931 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.007057 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24htz\" (UniqueName: \"kubernetes.io/projected/c3b14699-8089-4af7-b0bd-654a8fda9715-kube-api-access-24htz\") pod \"barbican-db-create-4c2nt\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.046749 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.050952 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4gqzx"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.051875 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: W0130 05:25:31.059046 4931 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-llv5h": failed to list *v1.Secret: secrets "keystone-keystone-dockercfg-llv5h" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 05:25:31 crc kubenswrapper[4931]: E0130 05:25:31.059084 4931 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-llv5h\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-keystone-dockercfg-llv5h\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 05:25:31 crc kubenswrapper[4931]: W0130 05:25:31.059121 4931 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: secrets "keystone-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 05:25:31 crc kubenswrapper[4931]: E0130 05:25:31.059132 4931 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 05:25:31 crc kubenswrapper[4931]: W0130 05:25:31.059159 4931 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: secrets "keystone" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 05:25:31 crc kubenswrapper[4931]: E0130 05:25:31.059169 4931 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 05:25:31 crc kubenswrapper[4931]: W0130 05:25:31.062718 4931 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list 
*v1.Secret: secrets "keystone-scripts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 05:25:31 crc kubenswrapper[4931]: E0130 05:25:31.062745 4931 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.063558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.063627 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-combined-ca-bundle\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.063725 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b78d3f2-c575-4b24-bbb8-c956f61a575d-operator-scripts\") pod \"cinder-8ee9-account-create-update-sdn4j\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.063749 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgs2\" (UniqueName: \"kubernetes.io/projected/3b78d3f2-c575-4b24-bbb8-c956f61a575d-kube-api-access-blgs2\") pod \"cinder-8ee9-account-create-update-sdn4j\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.063775 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgmx\" (UniqueName: \"kubernetes.io/projected/9d923658-472c-4565-bae3-5eb1e329a92c-kube-api-access-6lgmx\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.064469 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b78d3f2-c575-4b24-bbb8-c956f61a575d-operator-scripts\") pod \"cinder-8ee9-account-create-update-sdn4j\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.108178 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4gqzx"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.110129 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgs2\" (UniqueName: \"kubernetes.io/projected/3b78d3f2-c575-4b24-bbb8-c956f61a575d-kube-api-access-blgs2\") pod 
\"cinder-8ee9-account-create-update-sdn4j\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.151199 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-557f-account-create-update-6vjq5"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.152108 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: W0130 05:25:31.158206 4931 reflector.go:561] object-"openstack"/"neutron-db-secret": failed to list *v1.Secret: secrets "neutron-db-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 30 05:25:31 crc kubenswrapper[4931]: E0130 05:25:31.158239 4931 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-db-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"neutron-db-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.164769 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-operator-scripts\") pod \"neutron-557f-account-create-update-6vjq5\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.164817 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lgmx\" (UniqueName: \"kubernetes.io/projected/9d923658-472c-4565-bae3-5eb1e329a92c-kube-api-access-6lgmx\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.164838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.164869 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2w74\" (UniqueName: \"kubernetes.io/projected/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-kube-api-access-q2w74\") pod \"neutron-557f-account-create-update-6vjq5\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.164898 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-combined-ca-bundle\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.166117 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.200173 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-combined-ca-bundle\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.200342 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-557f-account-create-update-6vjq5"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.212649 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lgmx\" (UniqueName: \"kubernetes.io/projected/9d923658-472c-4565-bae3-5eb1e329a92c-kube-api-access-6lgmx\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.230083 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cwv94"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.231163 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.262652 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cwv94"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.267587 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2w74\" (UniqueName: \"kubernetes.io/projected/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-kube-api-access-q2w74\") pod \"neutron-557f-account-create-update-6vjq5\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.267839 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-operator-scripts\") pod \"neutron-557f-account-create-update-6vjq5\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.268609 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-operator-scripts\") pod \"neutron-557f-account-create-update-6vjq5\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.302696 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.344020 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2w74\" (UniqueName: \"kubernetes.io/projected/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-kube-api-access-q2w74\") pod \"neutron-557f-account-create-update-6vjq5\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.372363 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnxt\" (UniqueName: \"kubernetes.io/projected/2c29ace9-3be7-44a1-b8eb-d356a4721152-kube-api-access-pnnxt\") pod \"neutron-db-create-cwv94\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.372537 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c29ace9-3be7-44a1-b8eb-d356a4721152-operator-scripts\") pod \"neutron-db-create-cwv94\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.475010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnxt\" (UniqueName: \"kubernetes.io/projected/2c29ace9-3be7-44a1-b8eb-d356a4721152-kube-api-access-pnnxt\") pod \"neutron-db-create-cwv94\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.475111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c29ace9-3be7-44a1-b8eb-d356a4721152-operator-scripts\") pod \"neutron-db-create-cwv94\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.475746 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c29ace9-3be7-44a1-b8eb-d356a4721152-operator-scripts\") pod \"neutron-db-create-cwv94\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.480156 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2b2206-fcd5-432f-82a7-20e22cd3ceef" path="/var/lib/kubelet/pods/0c2b2206-fcd5-432f-82a7-20e22cd3ceef/volumes" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.480876 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cb5c-account-create-update-7n4vq"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.491077 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.505672 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-wtjbg"] Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.514035 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnxt\" (UniqueName: \"kubernetes.io/projected/2c29ace9-3be7-44a1-b8eb-d356a4721152-kube-api-access-pnnxt\") pod \"neutron-db-create-cwv94\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.566876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.570291 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wtjbg" event={"ID":"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0","Type":"ContainerStarted","Data":"612d6371cb07082ef0429b90b0d863ecd1c92b6f45ac7168b52671d88d7ef25d"} Jan 30 05:25:31 crc kubenswrapper[4931]: I0130 05:25:31.584943 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cb5c-account-create-update-7n4vq" event={"ID":"d98e6af1-4571-4da7-a6e8-0b54505af47c","Type":"ContainerStarted","Data":"aa1bc5770dea471d374a2f149f70de91cf907c08892cfe93a195b132fc8e0d87"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:31.880133 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4c2nt"] Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:31.882397 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:31.959107 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ee9-account-create-update-sdn4j"] Jan 30 05:25:35 crc kubenswrapper[4931]: W0130 05:25:31.965322 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b78d3f2_c575_4b24_bbb8_c956f61a575d.slice/crio-64352ea50cbd49b7e446033adc71bd1636d6e240e488199b49931a95d2cff069 WatchSource:0}: Error finding container 64352ea50cbd49b7e446033adc71bd1636d6e240e488199b49931a95d2cff069: Status 404 returned error can't find the container with id 64352ea50cbd49b7e446033adc71bd1636d6e240e488199b49931a95d2cff069 Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:31.999885 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r9xdc"] Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.000306 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.007056 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r9xdc"] Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.032759 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-557f-account-create-update-6vjq5"] Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.138211 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llv5h" Jan 30 05:25:35 crc kubenswrapper[4931]: E0130 05:25:32.166876 4931 secret.go:188] Couldn't get secret openstack/keystone-config-data: failed to sync secret cache: timed out waiting for the condition Jan 30 05:25:35 crc 
kubenswrapper[4931]: E0130 05:25:32.167042 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data podName:9d923658-472c-4565-bae3-5eb1e329a92c nodeName:}" failed. No retries permitted until 2026-01-30 05:25:32.667013539 +0000 UTC m=+1068.036923796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data") pod "keystone-db-sync-4gqzx" (UID: "9d923658-472c-4565-bae3-5eb1e329a92c") : failed to sync secret cache: timed out waiting for the condition Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.367148 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.392408 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.591787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-557f-account-create-update-6vjq5" event={"ID":"df6b82f5-5c39-4101-b9f8-05aaf9547a0b","Type":"ContainerStarted","Data":"aa23f5e073a3cef33987241913fa85543847130828c056bd851165f88aec0d30"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.592930 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ee9-account-create-update-sdn4j" event={"ID":"3b78d3f2-c575-4b24-bbb8-c956f61a575d","Type":"ContainerStarted","Data":"64352ea50cbd49b7e446033adc71bd1636d6e240e488199b49931a95d2cff069"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.594152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4c2nt" event={"ID":"c3b14699-8089-4af7-b0bd-654a8fda9715","Type":"ContainerStarted","Data":"c98a171eb3b4b2a4cc6684d3ea0e312812cf19d06e872769140d149acb612bf9"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.595511 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wtjbg" event={"ID":"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0","Type":"ContainerStarted","Data":"15692c1b35f8b38884128c50d64f7fe3e0155bf28a22d7aceb44dcecc3b74210"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.597033 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cb5c-account-create-update-7n4vq" event={"ID":"d98e6af1-4571-4da7-a6e8-0b54505af47c","Type":"ContainerStarted","Data":"98322602699d7d942a6271f0a7fc74a73af1b5a299d4d538e2ee24bc7375a406"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.612895 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-wtjbg" podStartSLOduration=2.612873856 podStartE2EDuration="2.612873856s" podCreationTimestamp="2026-01-30 05:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:32.606790591 +0000 UTC m=+1067.976700848" watchObservedRunningTime="2026-01-30 05:25:32.612873856 +0000 UTC m=+1067.982784113" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.627849 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-cb5c-account-create-update-7n4vq" podStartSLOduration=2.627831146 podStartE2EDuration="2.627831146s" podCreationTimestamp="2026-01-30 05:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 05:25:32.624955754 +0000 UTC m=+1067.994866011" watchObservedRunningTime="2026-01-30 05:25:32.627831146 +0000 UTC m=+1067.997741403" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.700319 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.706605 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data\") pod \"keystone-db-sync-4gqzx\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:32.911801 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:33.432608 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4" path="/var/lib/kubelet/pods/6cbb8ac5-f347-4f13-9c58-ecc57a62dfb4/volumes" Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:33.626906 4931 generic.go:334] "Generic (PLEG): container finished" podID="d98e6af1-4571-4da7-a6e8-0b54505af47c" containerID="98322602699d7d942a6271f0a7fc74a73af1b5a299d4d538e2ee24bc7375a406" exitCode=0 Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:33.626988 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cb5c-account-create-update-7n4vq" event={"ID":"d98e6af1-4571-4da7-a6e8-0b54505af47c","Type":"ContainerDied","Data":"98322602699d7d942a6271f0a7fc74a73af1b5a299d4d538e2ee24bc7375a406"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:33.636230 4931 generic.go:334] "Generic (PLEG): container finished" podID="e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" containerID="15692c1b35f8b38884128c50d64f7fe3e0155bf28a22d7aceb44dcecc3b74210" exitCode=0 Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:33.636343 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wtjbg" event={"ID":"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0","Type":"ContainerDied","Data":"15692c1b35f8b38884128c50d64f7fe3e0155bf28a22d7aceb44dcecc3b74210"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:35.657209 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ee9-account-create-update-sdn4j" event={"ID":"3b78d3f2-c575-4b24-bbb8-c956f61a575d","Type":"ContainerStarted","Data":"2f4a9744870428aceb547b2acdf130704ce1aaa6370e1105462da3c72da4e168"} Jan 30 05:25:35 crc kubenswrapper[4931]: I0130 05:25:35.663035 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4c2nt" event={"ID":"c3b14699-8089-4af7-b0bd-654a8fda9715","Type":"ContainerStarted","Data":"1a58b66910dccd0a3d3aecf3a69cc3be05007daec35ed4f4da6ecaf7deb3050f"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.086438 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cwv94"] Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.192738 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4gqzx"] Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.304632 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.309181 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.459060 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwsmf\" (UniqueName: \"kubernetes.io/projected/d98e6af1-4571-4da7-a6e8-0b54505af47c-kube-api-access-xwsmf\") pod \"d98e6af1-4571-4da7-a6e8-0b54505af47c\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.459317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfn76\" (UniqueName: \"kubernetes.io/projected/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-kube-api-access-nfn76\") pod \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.459436 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-operator-scripts\") pod \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\" (UID: \"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0\") " Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.459491 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d98e6af1-4571-4da7-a6e8-0b54505af47c-operator-scripts\") pod \"d98e6af1-4571-4da7-a6e8-0b54505af47c\" (UID: \"d98e6af1-4571-4da7-a6e8-0b54505af47c\") " Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.460169 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" (UID: "e65373ae-84e0-4338-be4c-8cc8bd2d3fb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.460439 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d98e6af1-4571-4da7-a6e8-0b54505af47c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d98e6af1-4571-4da7-a6e8-0b54505af47c" (UID: "d98e6af1-4571-4da7-a6e8-0b54505af47c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.465150 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-kube-api-access-nfn76" (OuterVolumeSpecName: "kube-api-access-nfn76") pod "e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" (UID: "e65373ae-84e0-4338-be4c-8cc8bd2d3fb0"). InnerVolumeSpecName "kube-api-access-nfn76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.465301 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98e6af1-4571-4da7-a6e8-0b54505af47c-kube-api-access-xwsmf" (OuterVolumeSpecName: "kube-api-access-xwsmf") pod "d98e6af1-4571-4da7-a6e8-0b54505af47c" (UID: "d98e6af1-4571-4da7-a6e8-0b54505af47c"). InnerVolumeSpecName "kube-api-access-xwsmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.561773 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwsmf\" (UniqueName: \"kubernetes.io/projected/d98e6af1-4571-4da7-a6e8-0b54505af47c-kube-api-access-xwsmf\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.561815 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfn76\" (UniqueName: \"kubernetes.io/projected/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-kube-api-access-nfn76\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.561827 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.561840 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d98e6af1-4571-4da7-a6e8-0b54505af47c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.675379 4931 generic.go:334] "Generic (PLEG): container finished" podID="2c29ace9-3be7-44a1-b8eb-d356a4721152" containerID="f8a2b41856adf7471c684772afc9b12f445fbd24f6ab5036ce18fde6331c17d4" exitCode=0 Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.675472 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwv94" event={"ID":"2c29ace9-3be7-44a1-b8eb-d356a4721152","Type":"ContainerDied","Data":"f8a2b41856adf7471c684772afc9b12f445fbd24f6ab5036ce18fde6331c17d4"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.675550 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwv94" event={"ID":"2c29ace9-3be7-44a1-b8eb-d356a4721152","Type":"ContainerStarted","Data":"5db11c59dc0ea93fac43524325f66f48d5401cc5cc845a375c0bc2d6e3288c9e"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.678562 4931 generic.go:334] "Generic (PLEG): container finished" podID="3b78d3f2-c575-4b24-bbb8-c956f61a575d" containerID="2f4a9744870428aceb547b2acdf130704ce1aaa6370e1105462da3c72da4e168" exitCode=0 Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.678739 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ee9-account-create-update-sdn4j" event={"ID":"3b78d3f2-c575-4b24-bbb8-c956f61a575d","Type":"ContainerDied","Data":"2f4a9744870428aceb547b2acdf130704ce1aaa6370e1105462da3c72da4e168"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.680577 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3b14699-8089-4af7-b0bd-654a8fda9715" containerID="1a58b66910dccd0a3d3aecf3a69cc3be05007daec35ed4f4da6ecaf7deb3050f" exitCode=0 Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.680674 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4c2nt" event={"ID":"c3b14699-8089-4af7-b0bd-654a8fda9715","Type":"ContainerDied","Data":"1a58b66910dccd0a3d3aecf3a69cc3be05007daec35ed4f4da6ecaf7deb3050f"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.682400 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-wtjbg" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.682407 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-wtjbg" event={"ID":"e65373ae-84e0-4338-be4c-8cc8bd2d3fb0","Type":"ContainerDied","Data":"612d6371cb07082ef0429b90b0d863ecd1c92b6f45ac7168b52671d88d7ef25d"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.682909 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="612d6371cb07082ef0429b90b0d863ecd1c92b6f45ac7168b52671d88d7ef25d" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.683559 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gqzx" event={"ID":"9d923658-472c-4565-bae3-5eb1e329a92c","Type":"ContainerStarted","Data":"685d0bdee5d1a947f16ef0e29880c274ee3422ec09c45cb58fd08bec46c96278"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.685085 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-7n4vq" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.685051 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cb5c-account-create-update-7n4vq" event={"ID":"d98e6af1-4571-4da7-a6e8-0b54505af47c","Type":"ContainerDied","Data":"aa1bc5770dea471d374a2f149f70de91cf907c08892cfe93a195b132fc8e0d87"} Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.685138 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1bc5770dea471d374a2f149f70de91cf907c08892cfe93a195b132fc8e0d87" Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.686876 4931 generic.go:334] "Generic (PLEG): container finished" podID="df6b82f5-5c39-4101-b9f8-05aaf9547a0b" containerID="94fc6d9869d9820d8c965d9ddc61b4a6003c2bcfb528dd4f82ab1c383ce5be01" exitCode=0 Jan 30 05:25:36 crc kubenswrapper[4931]: I0130 05:25:36.687016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-557f-account-create-update-6vjq5" event={"ID":"df6b82f5-5c39-4101-b9f8-05aaf9547a0b","Type":"ContainerDied","Data":"94fc6d9869d9820d8c965d9ddc61b4a6003c2bcfb528dd4f82ab1c383ce5be01"} Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.002093 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p975f"] Jan 30 05:25:37 crc kubenswrapper[4931]: E0130 05:25:37.003326 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e6af1-4571-4da7-a6e8-0b54505af47c" containerName="mariadb-account-create-update" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.003370 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e6af1-4571-4da7-a6e8-0b54505af47c" containerName="mariadb-account-create-update" Jan 30 05:25:37 crc kubenswrapper[4931]: E0130 05:25:37.003393 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" containerName="mariadb-database-create" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.003402 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" containerName="mariadb-database-create" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.004170 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" containerName="mariadb-database-create" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.004207 4931 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d98e6af1-4571-4da7-a6e8-0b54505af47c" containerName="mariadb-account-create-update" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.006471 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.009535 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.011863 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p975f"] Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.175770 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd612f9b-4de8-48e4-a945-c97e5c495292-operator-scripts\") pod \"root-account-create-update-p975f\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.175952 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7l4\" (UniqueName: \"kubernetes.io/projected/dd612f9b-4de8-48e4-a945-c97e5c495292-kube-api-access-tv7l4\") pod \"root-account-create-update-p975f\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.277497 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv7l4\" (UniqueName: \"kubernetes.io/projected/dd612f9b-4de8-48e4-a945-c97e5c495292-kube-api-access-tv7l4\") pod \"root-account-create-update-p975f\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.277577 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd612f9b-4de8-48e4-a945-c97e5c495292-operator-scripts\") pod \"root-account-create-update-p975f\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.278390 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd612f9b-4de8-48e4-a945-c97e5c495292-operator-scripts\") pod \"root-account-create-update-p975f\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.300681 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv7l4\" (UniqueName: \"kubernetes.io/projected/dd612f9b-4de8-48e4-a945-c97e5c495292-kube-api-access-tv7l4\") pod \"root-account-create-update-p975f\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.331924 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p975f" Jan 30 05:25:37 crc kubenswrapper[4931]: I0130 05:25:37.798382 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p975f"] Jan 30 05:25:37 crc kubenswrapper[4931]: W0130 05:25:37.805724 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd612f9b_4de8_48e4_a945_c97e5c495292.slice/crio-c42924e228e0faa34852385f08124501fd35f78817571e20aa497d3a1537a563 WatchSource:0}: Error finding container c42924e228e0faa34852385f08124501fd35f78817571e20aa497d3a1537a563: Status 404 returned error can't find the container with id c42924e228e0faa34852385f08124501fd35f78817571e20aa497d3a1537a563 Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.006027 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.096990 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2w74\" (UniqueName: \"kubernetes.io/projected/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-kube-api-access-q2w74\") pod \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.097171 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-operator-scripts\") pod \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\" (UID: \"df6b82f5-5c39-4101-b9f8-05aaf9547a0b\") " Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.098081 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df6b82f5-5c39-4101-b9f8-05aaf9547a0b" (UID: "df6b82f5-5c39-4101-b9f8-05aaf9547a0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.105634 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-kube-api-access-q2w74" (OuterVolumeSpecName: "kube-api-access-q2w74") pod "df6b82f5-5c39-4101-b9f8-05aaf9547a0b" (UID: "df6b82f5-5c39-4101-b9f8-05aaf9547a0b"). InnerVolumeSpecName "kube-api-access-q2w74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.198808 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2w74\" (UniqueName: \"kubernetes.io/projected/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-kube-api-access-q2w74\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.198847 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df6b82f5-5c39-4101-b9f8-05aaf9547a0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.300642 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.376305 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-5hp4b"] Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.376595 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="dnsmasq-dns" containerID="cri-o://48a476fc993377e732a41ec538be9f206289fff472ea517e4a1e5122eac3f5e4" gracePeriod=10 Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.704659 4931 generic.go:334] "Generic (PLEG): container finished" podID="dd612f9b-4de8-48e4-a945-c97e5c495292" containerID="e65d7d2b5f976da6a48bf573c615d7b8b7b4da4391bf0bdecb5b42aeee5717fb" exitCode=0 Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.704719 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p975f" event={"ID":"dd612f9b-4de8-48e4-a945-c97e5c495292","Type":"ContainerDied","Data":"e65d7d2b5f976da6a48bf573c615d7b8b7b4da4391bf0bdecb5b42aeee5717fb"} Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.704744 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p975f" event={"ID":"dd612f9b-4de8-48e4-a945-c97e5c495292","Type":"ContainerStarted","Data":"c42924e228e0faa34852385f08124501fd35f78817571e20aa497d3a1537a563"} Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.711255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-557f-account-create-update-6vjq5" event={"ID":"df6b82f5-5c39-4101-b9f8-05aaf9547a0b","Type":"ContainerDied","Data":"aa23f5e073a3cef33987241913fa85543847130828c056bd851165f88aec0d30"} Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.711281 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa23f5e073a3cef33987241913fa85543847130828c056bd851165f88aec0d30" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.711465 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-557f-account-create-update-6vjq5" Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.715935 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerID="48a476fc993377e732a41ec538be9f206289fff472ea517e4a1e5122eac3f5e4" exitCode=0 Jan 30 05:25:38 crc kubenswrapper[4931]: I0130 05:25:38.716039 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" event={"ID":"eb20eb5e-4f22-4088-98dc-44eaf5ac5958","Type":"ContainerDied","Data":"48a476fc993377e732a41ec538be9f206289fff472ea517e4a1e5122eac3f5e4"} Jan 30 05:25:40 crc kubenswrapper[4931]: I0130 05:25:40.591157 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.180277 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p975f" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.208013 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.219810 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.248150 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.252272 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd612f9b-4de8-48e4-a945-c97e5c495292-operator-scripts\") pod \"dd612f9b-4de8-48e4-a945-c97e5c495292\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.252367 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv7l4\" (UniqueName: \"kubernetes.io/projected/dd612f9b-4de8-48e4-a945-c97e5c495292-kube-api-access-tv7l4\") pod \"dd612f9b-4de8-48e4-a945-c97e5c495292\" (UID: \"dd612f9b-4de8-48e4-a945-c97e5c495292\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.253678 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd612f9b-4de8-48e4-a945-c97e5c495292-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd612f9b-4de8-48e4-a945-c97e5c495292" (UID: "dd612f9b-4de8-48e4-a945-c97e5c495292"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.262216 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd612f9b-4de8-48e4-a945-c97e5c495292-kube-api-access-tv7l4" (OuterVolumeSpecName: "kube-api-access-tv7l4") pod "dd612f9b-4de8-48e4-a945-c97e5c495292" (UID: "dd612f9b-4de8-48e4-a945-c97e5c495292"). InnerVolumeSpecName "kube-api-access-tv7l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.302123 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353313 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b14699-8089-4af7-b0bd-654a8fda9715-operator-scripts\") pod \"c3b14699-8089-4af7-b0bd-654a8fda9715\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353596 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c29ace9-3be7-44a1-b8eb-d356a4721152-operator-scripts\") pod \"2c29ace9-3be7-44a1-b8eb-d356a4721152\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353664 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blgs2\" (UniqueName: \"kubernetes.io/projected/3b78d3f2-c575-4b24-bbb8-c956f61a575d-kube-api-access-blgs2\") pod \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353771 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3b14699-8089-4af7-b0bd-654a8fda9715-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3b14699-8089-4af7-b0bd-654a8fda9715" (UID: "c3b14699-8089-4af7-b0bd-654a8fda9715"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353790 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24htz\" (UniqueName: \"kubernetes.io/projected/c3b14699-8089-4af7-b0bd-654a8fda9715-kube-api-access-24htz\") pod \"c3b14699-8089-4af7-b0bd-654a8fda9715\" (UID: \"c3b14699-8089-4af7-b0bd-654a8fda9715\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353865 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b78d3f2-c575-4b24-bbb8-c956f61a575d-operator-scripts\") pod \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\" (UID: \"3b78d3f2-c575-4b24-bbb8-c956f61a575d\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.353888 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnnxt\" (UniqueName: \"kubernetes.io/projected/2c29ace9-3be7-44a1-b8eb-d356a4721152-kube-api-access-pnnxt\") pod \"2c29ace9-3be7-44a1-b8eb-d356a4721152\" (UID: \"2c29ace9-3be7-44a1-b8eb-d356a4721152\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.354473 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3b14699-8089-4af7-b0bd-654a8fda9715-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.354488 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd612f9b-4de8-48e4-a945-c97e5c495292-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.354498 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv7l4\" (UniqueName: \"kubernetes.io/projected/dd612f9b-4de8-48e4-a945-c97e5c495292-kube-api-access-tv7l4\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc 
kubenswrapper[4931]: I0130 05:25:41.354538 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c29ace9-3be7-44a1-b8eb-d356a4721152-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c29ace9-3be7-44a1-b8eb-d356a4721152" (UID: "2c29ace9-3be7-44a1-b8eb-d356a4721152"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.358127 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b78d3f2-c575-4b24-bbb8-c956f61a575d-kube-api-access-blgs2" (OuterVolumeSpecName: "kube-api-access-blgs2") pod "3b78d3f2-c575-4b24-bbb8-c956f61a575d" (UID: "3b78d3f2-c575-4b24-bbb8-c956f61a575d"). InnerVolumeSpecName "kube-api-access-blgs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.358320 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b14699-8089-4af7-b0bd-654a8fda9715-kube-api-access-24htz" (OuterVolumeSpecName: "kube-api-access-24htz") pod "c3b14699-8089-4af7-b0bd-654a8fda9715" (UID: "c3b14699-8089-4af7-b0bd-654a8fda9715"). InnerVolumeSpecName "kube-api-access-24htz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.359053 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c29ace9-3be7-44a1-b8eb-d356a4721152-kube-api-access-pnnxt" (OuterVolumeSpecName: "kube-api-access-pnnxt") pod "2c29ace9-3be7-44a1-b8eb-d356a4721152" (UID: "2c29ace9-3be7-44a1-b8eb-d356a4721152"). InnerVolumeSpecName "kube-api-access-pnnxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.359624 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b78d3f2-c575-4b24-bbb8-c956f61a575d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b78d3f2-c575-4b24-bbb8-c956f61a575d" (UID: "3b78d3f2-c575-4b24-bbb8-c956f61a575d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.455916 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-nb\") pod \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.455964 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-config\") pod \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.455988 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pjqc\" (UniqueName: \"kubernetes.io/projected/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-kube-api-access-4pjqc\") pod \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456140 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-sb\") pod \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456182 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-dns-svc\") pod \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\" (UID: \"eb20eb5e-4f22-4088-98dc-44eaf5ac5958\") " Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456495 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c29ace9-3be7-44a1-b8eb-d356a4721152-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456510 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blgs2\" (UniqueName: \"kubernetes.io/projected/3b78d3f2-c575-4b24-bbb8-c956f61a575d-kube-api-access-blgs2\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456522 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24htz\" (UniqueName: \"kubernetes.io/projected/c3b14699-8089-4af7-b0bd-654a8fda9715-kube-api-access-24htz\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456533 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b78d3f2-c575-4b24-bbb8-c956f61a575d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.456541 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnnxt\" (UniqueName: \"kubernetes.io/projected/2c29ace9-3be7-44a1-b8eb-d356a4721152-kube-api-access-pnnxt\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.471210 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-kube-api-access-4pjqc" (OuterVolumeSpecName: "kube-api-access-4pjqc") pod "eb20eb5e-4f22-4088-98dc-44eaf5ac5958" (UID: 
"eb20eb5e-4f22-4088-98dc-44eaf5ac5958"). InnerVolumeSpecName "kube-api-access-4pjqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.497150 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb20eb5e-4f22-4088-98dc-44eaf5ac5958" (UID: "eb20eb5e-4f22-4088-98dc-44eaf5ac5958"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.506845 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb20eb5e-4f22-4088-98dc-44eaf5ac5958" (UID: "eb20eb5e-4f22-4088-98dc-44eaf5ac5958"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.508062 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb20eb5e-4f22-4088-98dc-44eaf5ac5958" (UID: "eb20eb5e-4f22-4088-98dc-44eaf5ac5958"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.510217 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-config" (OuterVolumeSpecName: "config") pod "eb20eb5e-4f22-4088-98dc-44eaf5ac5958" (UID: "eb20eb5e-4f22-4088-98dc-44eaf5ac5958"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.559047 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.559185 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.559356 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.559398 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.559414 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pjqc\" (UniqueName: \"kubernetes.io/projected/eb20eb5e-4f22-4088-98dc-44eaf5ac5958-kube-api-access-4pjqc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.746768 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ee9-account-create-update-sdn4j" event={"ID":"3b78d3f2-c575-4b24-bbb8-c956f61a575d","Type":"ContainerDied","Data":"64352ea50cbd49b7e446033adc71bd1636d6e240e488199b49931a95d2cff069"} Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.746814 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64352ea50cbd49b7e446033adc71bd1636d6e240e488199b49931a95d2cff069" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.746883 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-sdn4j" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.754595 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4c2nt" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.754630 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4c2nt" event={"ID":"c3b14699-8089-4af7-b0bd-654a8fda9715","Type":"ContainerDied","Data":"c98a171eb3b4b2a4cc6684d3ea0e312812cf19d06e872769140d149acb612bf9"} Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.754671 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c98a171eb3b4b2a4cc6684d3ea0e312812cf19d06e872769140d149acb612bf9" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.761039 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gqzx" event={"ID":"9d923658-472c-4565-bae3-5eb1e329a92c","Type":"ContainerStarted","Data":"508e6e3003e86ee32a2b32dcec684271942a70c7d32070551d5605127eb8d9ad"} Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.764557 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" event={"ID":"eb20eb5e-4f22-4088-98dc-44eaf5ac5958","Type":"ContainerDied","Data":"683aec62918f40b319d4b21f6811c3625fb69dac85c6d8a9170f3b1e7160bffa"} Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.764622 4931 scope.go:117] "RemoveContainer" containerID="48a476fc993377e732a41ec538be9f206289fff472ea517e4a1e5122eac3f5e4" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.764791 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-5hp4b" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.768688 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cwv94" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.768882 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cwv94" event={"ID":"2c29ace9-3be7-44a1-b8eb-d356a4721152","Type":"ContainerDied","Data":"5db11c59dc0ea93fac43524325f66f48d5401cc5cc845a375c0bc2d6e3288c9e"} Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.768927 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db11c59dc0ea93fac43524325f66f48d5401cc5cc845a375c0bc2d6e3288c9e" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.772086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p975f" event={"ID":"dd612f9b-4de8-48e4-a945-c97e5c495292","Type":"ContainerDied","Data":"c42924e228e0faa34852385f08124501fd35f78817571e20aa497d3a1537a563"} Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.772115 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42924e228e0faa34852385f08124501fd35f78817571e20aa497d3a1537a563" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.772174 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p975f" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.778719 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4gqzx" podStartSLOduration=5.912243733 podStartE2EDuration="10.778706695s" podCreationTimestamp="2026-01-30 05:25:31 +0000 UTC" firstStartedPulling="2026-01-30 05:25:36.199361395 +0000 UTC m=+1071.569271652" lastFinishedPulling="2026-01-30 05:25:41.065824357 +0000 UTC m=+1076.435734614" observedRunningTime="2026-01-30 05:25:41.777153991 +0000 UTC m=+1077.147064268" watchObservedRunningTime="2026-01-30 05:25:41.778706695 +0000 UTC m=+1077.148616962" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.806823 4931 scope.go:117] "RemoveContainer" containerID="5b51c3e6a6e67206beccccc2be017d2e75bb1a8386fa12f6af6b641475f06048" Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.810739 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-5hp4b"] Jan 30 05:25:41 crc kubenswrapper[4931]: I0130 05:25:41.817795 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-5hp4b"] Jan 30 05:25:43 crc kubenswrapper[4931]: I0130 05:25:43.439042 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" path="/var/lib/kubelet/pods/eb20eb5e-4f22-4088-98dc-44eaf5ac5958/volumes" Jan 30 05:25:44 crc kubenswrapper[4931]: I0130 05:25:44.811467 4931 generic.go:334] "Generic (PLEG): container finished" podID="9d923658-472c-4565-bae3-5eb1e329a92c" containerID="508e6e3003e86ee32a2b32dcec684271942a70c7d32070551d5605127eb8d9ad" exitCode=0 Jan 30 05:25:44 crc kubenswrapper[4931]: I0130 05:25:44.811471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gqzx" event={"ID":"9d923658-472c-4565-bae3-5eb1e329a92c","Type":"ContainerDied","Data":"508e6e3003e86ee32a2b32dcec684271942a70c7d32070551d5605127eb8d9ad"} Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.244045 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.356030 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lgmx\" (UniqueName: \"kubernetes.io/projected/9d923658-472c-4565-bae3-5eb1e329a92c-kube-api-access-6lgmx\") pod \"9d923658-472c-4565-bae3-5eb1e329a92c\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.356118 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-combined-ca-bundle\") pod \"9d923658-472c-4565-bae3-5eb1e329a92c\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.356173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data\") pod \"9d923658-472c-4565-bae3-5eb1e329a92c\" (UID: \"9d923658-472c-4565-bae3-5eb1e329a92c\") " Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.363653 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d923658-472c-4565-bae3-5eb1e329a92c-kube-api-access-6lgmx" (OuterVolumeSpecName: "kube-api-access-6lgmx") pod "9d923658-472c-4565-bae3-5eb1e329a92c" (UID: "9d923658-472c-4565-bae3-5eb1e329a92c"). InnerVolumeSpecName "kube-api-access-6lgmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.397619 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d923658-472c-4565-bae3-5eb1e329a92c" (UID: "9d923658-472c-4565-bae3-5eb1e329a92c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.399745 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data" (OuterVolumeSpecName: "config-data") pod "9d923658-472c-4565-bae3-5eb1e329a92c" (UID: "9d923658-472c-4565-bae3-5eb1e329a92c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.457753 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lgmx\" (UniqueName: \"kubernetes.io/projected/9d923658-472c-4565-bae3-5eb1e329a92c-kube-api-access-6lgmx\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.457795 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.457806 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d923658-472c-4565-bae3-5eb1e329a92c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.839047 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4gqzx" event={"ID":"9d923658-472c-4565-bae3-5eb1e329a92c","Type":"ContainerDied","Data":"685d0bdee5d1a947f16ef0e29880c274ee3422ec09c45cb58fd08bec46c96278"} Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.839105 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685d0bdee5d1a947f16ef0e29880c274ee3422ec09c45cb58fd08bec46c96278" Jan 30 05:25:46 crc kubenswrapper[4931]: I0130 05:25:46.839116 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4gqzx" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.078381 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-zrpdg"] Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079015 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c29ace9-3be7-44a1-b8eb-d356a4721152" containerName="mariadb-database-create" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079035 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c29ace9-3be7-44a1-b8eb-d356a4721152" containerName="mariadb-database-create" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079054 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b14699-8089-4af7-b0bd-654a8fda9715" containerName="mariadb-database-create" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079062 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b14699-8089-4af7-b0bd-654a8fda9715" containerName="mariadb-database-create" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079078 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d923658-472c-4565-bae3-5eb1e329a92c" containerName="keystone-db-sync" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079086 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d923658-472c-4565-bae3-5eb1e329a92c" containerName="keystone-db-sync" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079105 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd612f9b-4de8-48e4-a945-c97e5c495292" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079114 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd612f9b-4de8-48e4-a945-c97e5c495292" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079130 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3b78d3f2-c575-4b24-bbb8-c956f61a575d" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079140 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b78d3f2-c575-4b24-bbb8-c956f61a575d" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079157 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6b82f5-5c39-4101-b9f8-05aaf9547a0b" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079165 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6b82f5-5c39-4101-b9f8-05aaf9547a0b" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079175 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="init" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079183 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="init" Jan 30 05:25:47 crc kubenswrapper[4931]: E0130 05:25:47.079205 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="dnsmasq-dns" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079213 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="dnsmasq-dns" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079394 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6b82f5-5c39-4101-b9f8-05aaf9547a0b" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079406 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d923658-472c-4565-bae3-5eb1e329a92c" containerName="keystone-db-sync" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079442 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd612f9b-4de8-48e4-a945-c97e5c495292" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079453 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b78d3f2-c575-4b24-bbb8-c956f61a575d" containerName="mariadb-account-create-update" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079472 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c29ace9-3be7-44a1-b8eb-d356a4721152" containerName="mariadb-database-create" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079485 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b14699-8089-4af7-b0bd-654a8fda9715" containerName="mariadb-database-create" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.079502 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb20eb5e-4f22-4088-98dc-44eaf5ac5958" containerName="dnsmasq-dns" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.080490 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.089046 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-zrpdg"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.149079 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2gl9c"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.150408 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.156472 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.156624 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.156703 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.156815 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.156936 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llv5h" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.161521 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2gl9c"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.170861 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.170942 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.170979 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-config\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.171014 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbtg6\" (UniqueName: \"kubernetes.io/projected/81259525-e98e-4119-9071-2e17b0fb1640-kube-api-access-nbtg6\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.171031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " 
pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.171051 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272459 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wwl\" (UniqueName: \"kubernetes.io/projected/8946f758-7352-4859-a3c3-b98bca9b99e4-kube-api-access-p7wwl\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272512 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-fernet-keys\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272551 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272595 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-config\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272631 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbtg6\" (UniqueName: \"kubernetes.io/projected/81259525-e98e-4119-9071-2e17b0fb1640-kube-api-access-nbtg6\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272649 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272667 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-config-data\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272690 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " 
pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272720 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-credential-keys\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272742 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-combined-ca-bundle\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272757 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.272782 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-scripts\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.273625 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.274154 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-config\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.274887 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.275389 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.275900 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 
05:25:47.311484 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbtg6\" (UniqueName: \"kubernetes.io/projected/81259525-e98e-4119-9071-2e17b0fb1640-kube-api-access-nbtg6\") pod \"dnsmasq-dns-54b4bb76d5-zrpdg\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.326143 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.327938 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.333952 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.334109 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.344008 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.374914 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wwl\" (UniqueName: \"kubernetes.io/projected/8946f758-7352-4859-a3c3-b98bca9b99e4-kube-api-access-p7wwl\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.374984 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-fernet-keys\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.375094 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-config-data\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.375141 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-credential-keys\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.375175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-combined-ca-bundle\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.375209 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-scripts\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.379302 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-scripts\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.379537 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-credential-keys\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.379563 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-config-data\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.381179 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-fernet-keys\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.387365 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-combined-ca-bundle\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.394997 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wwl\" (UniqueName: \"kubernetes.io/projected/8946f758-7352-4859-a3c3-b98bca9b99e4-kube-api-access-p7wwl\") pod \"keystone-bootstrap-2gl9c\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.404910 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ldr24"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.405874 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.407992 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vt49t" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.409045 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.410577 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.439179 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ldr24"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.461487 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kbkmb"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.462488 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.465936 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.466196 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ffrzt" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.468015 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.471279 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.476905 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-combined-ca-bundle\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.476959 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-scripts\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.476994 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-config-data\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.477031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-log-httpd\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.477059 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-db-sync-config-data\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.477658 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rpr97"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.484484 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.478325 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.484977 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p852h\" (UniqueName: \"kubernetes.io/projected/f3ddcee7-a757-43b5-bf76-552cbd8d9078-kube-api-access-p852h\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.485004 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-run-httpd\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.485095 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9x6\" (UniqueName: \"kubernetes.io/projected/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-kube-api-access-nl9x6\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.485128 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.486267 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kv6bp" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.486578 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.488593 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.506259 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kbkmb"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.518022 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rpr97"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.578782 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-fkqxj"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.579770 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.586977 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.587289 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.587815 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fkqxj"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.588884 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-config\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.588915 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-db-sync-config-data\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.588955 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-config-data\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.588984 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl9x6\" (UniqueName: \"kubernetes.io/projected/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-kube-api-access-nl9x6\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.589008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.589130 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvg45\" (UniqueName: \"kubernetes.io/projected/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-kube-api-access-zvg45\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.589150 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-etc-machine-id\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.598760 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-combined-ca-bundle\") pod \"barbican-db-sync-ldr24\" (UID: 
\"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.598855 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-scripts\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.598915 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-config-data\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.598963 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-scripts\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599035 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-log-httpd\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599052 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-combined-ca-bundle\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599075 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6jwz\" (UniqueName: \"kubernetes.io/projected/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-kube-api-access-x6jwz\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599110 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-db-sync-config-data\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599172 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-combined-ca-bundle\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599191 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599211 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p852h\" (UniqueName: \"kubernetes.io/projected/f3ddcee7-a757-43b5-bf76-552cbd8d9078-kube-api-access-p852h\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599232 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-run-httpd\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.599690 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-run-httpd\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.611456 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-zrpdg"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.611917 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-log-httpd\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.612481 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-scripts\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.618323 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-db-sync-config-data\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.619347 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl9x6\" (UniqueName: \"kubernetes.io/projected/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-kube-api-access-nl9x6\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.620030 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-combined-ca-bundle\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.621009 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-config-data\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.622787 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.625028 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.631970 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fttzx" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.634398 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-6rw7f"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.635282 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p852h\" (UniqueName: \"kubernetes.io/projected/f3ddcee7-a757-43b5-bf76-552cbd8d9078-kube-api-access-p852h\") pod \"barbican-db-sync-ldr24\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") " pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.635674 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.659920 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.694021 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-6rw7f"] Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703697 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-combined-ca-bundle\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703740 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6jwz\" (UniqueName: \"kubernetes.io/projected/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-kube-api-access-x6jwz\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703784 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-combined-ca-bundle\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703814 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-logs\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703831 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-scripts\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " 
pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703849 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-db-sync-config-data\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-config\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703886 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-config-data\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703927 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-combined-ca-bundle\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703951 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvg45\" (UniqueName: \"kubernetes.io/projected/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-kube-api-access-zvg45\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.703970 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-etc-machine-id\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.704024 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkqq\" (UniqueName: \"kubernetes.io/projected/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-kube-api-access-5lkqq\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.704047 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-config-data\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.704078 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-scripts\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.707660 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-combined-ca-bundle\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.707999 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-scripts\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.711888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-db-sync-config-data\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.715109 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-combined-ca-bundle\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.715188 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-etc-machine-id\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.718012 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-config\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.719402 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-config-data\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.727622 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6jwz\" (UniqueName: \"kubernetes.io/projected/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-kube-api-access-x6jwz\") pod \"cinder-db-sync-rpr97\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.752919 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvg45\" (UniqueName: \"kubernetes.io/projected/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-kube-api-access-zvg45\") pod \"neutron-db-sync-kbkmb\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805630 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: 
\"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805707 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-logs\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805734 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-scripts\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805767 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805790 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mcg\" (UniqueName: \"kubernetes.io/projected/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-kube-api-access-s5mcg\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805822 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-combined-ca-bundle\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805843 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805867 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-config\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805891 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkqq\" (UniqueName: \"kubernetes.io/projected/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-kube-api-access-5lkqq\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805916 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-config-data\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " 
pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.805939 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.806608 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-logs\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.810099 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-combined-ca-bundle\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.810982 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-config-data\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.812210 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-scripts\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.825958 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkqq\" (UniqueName: \"kubernetes.io/projected/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-kube-api-access-5lkqq\") pod \"placement-db-sync-fkqxj\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.855628 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ldr24" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.871384 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.896310 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rpr97" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.907586 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-config\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.907637 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.907699 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.907753 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.907775 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mcg\" (UniqueName: \"kubernetes.io/projected/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-kube-api-access-s5mcg\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.907811 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.909628 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-config\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.909761 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.910299 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.910884 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.911198 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.924930 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mcg\" (UniqueName: \"kubernetes.io/projected/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-kube-api-access-s5mcg\") pod \"dnsmasq-dns-5dc4fcdbc-6rw7f\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.924997 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fkqxj" Jan 30 05:25:47 crc kubenswrapper[4931]: I0130 05:25:47.971630 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.063954 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-zrpdg"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.189072 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2gl9c"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.195322 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:25:48 crc kubenswrapper[4931]: W0130 05:25:48.223347 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8946f758_7352_4859_a3c3_b98bca9b99e4.slice/crio-c26bd96ed17ec97ce33862cd249a01687dfe99dfd46407b53a72a430d1e772ff WatchSource:0}: Error finding container c26bd96ed17ec97ce33862cd249a01687dfe99dfd46407b53a72a430d1e772ff: Status 404 returned error can't find the container with id c26bd96ed17ec97ce33862cd249a01687dfe99dfd46407b53a72a430d1e772ff Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.253302 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.254630 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.257348 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.257558 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lnq99" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.257670 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.257782 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.269387 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.309254 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.314172 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.321866 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.322087 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.324242 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418322 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418615 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418646 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-config-data\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418668 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418694 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418712 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418746 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-scripts\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418788 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-logs\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418808 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-logs\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418831 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418849 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xdb\" (UniqueName: \"kubernetes.io/projected/85061f18-4349-447f-b1ca-4a9a54461745-kube-api-access-l6xdb\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418867 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418915 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418935 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2l92\" (UniqueName: \"kubernetes.io/projected/a718b748-698c-44cc-8a28-b66a97405c41-kube-api-access-f2l92\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.418955 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.447888 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ldr24"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529307 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-scripts\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529372 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-logs\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529397 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-logs\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529437 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6xdb\" (UniqueName: \"kubernetes.io/projected/85061f18-4349-447f-b1ca-4a9a54461745-kube-api-access-l6xdb\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529490 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529524 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529570 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2l92\" (UniqueName: \"kubernetes.io/projected/a718b748-698c-44cc-8a28-b66a97405c41-kube-api-access-f2l92\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529589 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529619 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529638 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529658 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-config-data\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529673 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.529695 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 
crc kubenswrapper[4931]: I0130 05:25:48.529712 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.532305 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-logs\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.532636 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-logs\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.533055 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.537333 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.538353 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.550846 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.554360 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.560531 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-config-data\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.569177 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.574505 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.574965 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.576209 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-scripts\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.585043 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rpr97"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.590532 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.591050 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.592390 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6xdb\" (UniqueName: \"kubernetes.io/projected/85061f18-4349-447f-b1ca-4a9a54461745-kube-api-access-l6xdb\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.594162 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2l92\" (UniqueName: \"kubernetes.io/projected/a718b748-698c-44cc-8a28-b66a97405c41-kube-api-access-f2l92\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.609679 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.609968 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.662781 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kbkmb"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.680535 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fkqxj"] Jan 30 05:25:48 crc kubenswrapper[4931]: W0130 05:25:48.686831 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3ddcee7_a757_43b5_bf76_552cbd8d9078.slice/crio-52736b18c683d128d17308fceb7bf9f60b140aff7c806fb9ba3a93b56cb26bc0 WatchSource:0}: Error finding container 52736b18c683d128d17308fceb7bf9f60b140aff7c806fb9ba3a93b56cb26bc0: Status 404 returned error can't find the container with id 52736b18c683d128d17308fceb7bf9f60b140aff7c806fb9ba3a93b56cb26bc0 Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.707558 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-6rw7f"] Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.715444 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.881454 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fkqxj" event={"ID":"438fbbb5-a318-4714-9dac-e3f0fc3f63d3","Type":"ContainerStarted","Data":"46af11befd18c4fdbfdc15f44fec26d441cc576260685156c355baab6e60ddb1"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.902843 4931 generic.go:334] "Generic (PLEG): container finished" podID="81259525-e98e-4119-9071-2e17b0fb1640" containerID="869ab78f6d42b66bebe565b222bf7967e6ecee6cf3f268053a0c2c39b7f70563" exitCode=0 Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.902917 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" event={"ID":"81259525-e98e-4119-9071-2e17b0fb1640","Type":"ContainerDied","Data":"869ab78f6d42b66bebe565b222bf7967e6ecee6cf3f268053a0c2c39b7f70563"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.902943 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" event={"ID":"81259525-e98e-4119-9071-2e17b0fb1640","Type":"ContainerStarted","Data":"886b6923a64a16210b17afbe28659527c17a059e7a196a2fd3f76fbb734ff512"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.903367 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.942188 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kbkmb" event={"ID":"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719","Type":"ContainerStarted","Data":"7ad5adedbc116cdb578bc473211ba2fbb992ce127a4e1710f27293fb378d6bd0"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.950591 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ldr24" event={"ID":"f3ddcee7-a757-43b5-bf76-552cbd8d9078","Type":"ContainerStarted","Data":"52736b18c683d128d17308fceb7bf9f60b140aff7c806fb9ba3a93b56cb26bc0"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.955880 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rpr97" event={"ID":"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6","Type":"ContainerStarted","Data":"f838ce1f11e506679d678bae95342cc3dcecec78b2114b17644603c407ad3619"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.962631 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerStarted","Data":"c0144289ab3513de686db41a01bd60e595d46a3f8bcaea66b48e5c2753f90feb"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.976609 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2gl9c" event={"ID":"8946f758-7352-4859-a3c3-b98bca9b99e4","Type":"ContainerStarted","Data":"43be7b60c2a00736cc6eb1df08fb2617062a5a1b05069e0aa41b60294b71b16c"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.976662 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2gl9c" event={"ID":"8946f758-7352-4859-a3c3-b98bca9b99e4","Type":"ContainerStarted","Data":"c26bd96ed17ec97ce33862cd249a01687dfe99dfd46407b53a72a430d1e772ff"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.979999 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" event={"ID":"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089","Type":"ContainerStarted","Data":"ae9e2c438382358c90343f1970628f2e8ea67a2dacc48b9e3c93a331cd67467e"} Jan 30 05:25:48 crc kubenswrapper[4931]: I0130 05:25:48.997651 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2gl9c" podStartSLOduration=1.997635045 podStartE2EDuration="1.997635045s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:48.995528754 +0000 UTC m=+1084.365439011" watchObservedRunningTime="2026-01-30 05:25:48.997635045 +0000 UTC m=+1084.367545302" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.223629 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.265377 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:49 crc kubenswrapper[4931]: W0130 05:25:49.265536 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85061f18_4349_447f_b1ca_4a9a54461745.slice/crio-e854d57483e337fee1187a460568d29dfc2cb253eaa1fc3cbbd817c58be36b42 WatchSource:0}: Error finding container e854d57483e337fee1187a460568d29dfc2cb253eaa1fc3cbbd817c58be36b42: Status 404 returned error can't find the container with id e854d57483e337fee1187a460568d29dfc2cb253eaa1fc3cbbd817c58be36b42 Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.367894 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-svc\") pod \"81259525-e98e-4119-9071-2e17b0fb1640\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.367968 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-sb\") pod \"81259525-e98e-4119-9071-2e17b0fb1640\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.368082 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-nb\") pod \"81259525-e98e-4119-9071-2e17b0fb1640\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.368122 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-config\") pod \"81259525-e98e-4119-9071-2e17b0fb1640\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.368179 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbtg6\" (UniqueName: \"kubernetes.io/projected/81259525-e98e-4119-9071-2e17b0fb1640-kube-api-access-nbtg6\") pod \"81259525-e98e-4119-9071-2e17b0fb1640\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.368210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-swift-storage-0\") pod \"81259525-e98e-4119-9071-2e17b0fb1640\" (UID: \"81259525-e98e-4119-9071-2e17b0fb1640\") " Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.396412 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81259525-e98e-4119-9071-2e17b0fb1640" (UID: "81259525-e98e-4119-9071-2e17b0fb1640"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.397367 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81259525-e98e-4119-9071-2e17b0fb1640-kube-api-access-nbtg6" (OuterVolumeSpecName: "kube-api-access-nbtg6") pod "81259525-e98e-4119-9071-2e17b0fb1640" (UID: "81259525-e98e-4119-9071-2e17b0fb1640"). InnerVolumeSpecName "kube-api-access-nbtg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.400045 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81259525-e98e-4119-9071-2e17b0fb1640" (UID: "81259525-e98e-4119-9071-2e17b0fb1640"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.410855 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81259525-e98e-4119-9071-2e17b0fb1640" (UID: "81259525-e98e-4119-9071-2e17b0fb1640"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.411101 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81259525-e98e-4119-9071-2e17b0fb1640" (UID: "81259525-e98e-4119-9071-2e17b0fb1640"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.432725 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-config" (OuterVolumeSpecName: "config") pod "81259525-e98e-4119-9071-2e17b0fb1640" (UID: "81259525-e98e-4119-9071-2e17b0fb1640"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.470057 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.470088 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.470118 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.470131 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.470139 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbtg6\" (UniqueName: \"kubernetes.io/projected/81259525-e98e-4119-9071-2e17b0fb1640-kube-api-access-nbtg6\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.470149 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81259525-e98e-4119-9071-2e17b0fb1640-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.522659 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.821027 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.870642 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.890227 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.995786 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" event={"ID":"81259525-e98e-4119-9071-2e17b0fb1640","Type":"ContainerDied","Data":"886b6923a64a16210b17afbe28659527c17a059e7a196a2fd3f76fbb734ff512"} Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.995816 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-zrpdg" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.995845 4931 scope.go:117] "RemoveContainer" containerID="869ab78f6d42b66bebe565b222bf7967e6ecee6cf3f268053a0c2c39b7f70563" Jan 30 05:25:49 crc kubenswrapper[4931]: I0130 05:25:49.999412 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a718b748-698c-44cc-8a28-b66a97405c41","Type":"ContainerStarted","Data":"c3f9b84bf435cb31e157c89826a592f8cd6b22ac59d33e952e46664d8ee81ab7"} Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.002612 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kbkmb" event={"ID":"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719","Type":"ContainerStarted","Data":"6a0d3a37541dd8bff3edd7d0762f4af19258be892f0b28a3ee8ffd644ba91460"} Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.006095 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85061f18-4349-447f-b1ca-4a9a54461745","Type":"ContainerStarted","Data":"e854d57483e337fee1187a460568d29dfc2cb253eaa1fc3cbbd817c58be36b42"} Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.009953 4931 generic.go:334] "Generic (PLEG): container finished" podID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerID="2de911fa734d3f7bf71674e62b4beae90797f33e1cefb2483c1ee516fdc3ab44" exitCode=0 Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.010012 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" event={"ID":"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089","Type":"ContainerDied","Data":"2de911fa734d3f7bf71674e62b4beae90797f33e1cefb2483c1ee516fdc3ab44"} Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.057314 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-zrpdg"] Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.084391 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-zrpdg"] Jan 30 05:25:50 crc kubenswrapper[4931]: I0130 05:25:50.092051 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kbkmb" podStartSLOduration=3.09202944 podStartE2EDuration="3.09202944s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:50.04580906 +0000 UTC m=+1085.415719317" watchObservedRunningTime="2026-01-30 05:25:50.09202944 +0000 UTC m=+1085.461939697" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.025842 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85061f18-4349-447f-b1ca-4a9a54461745","Type":"ContainerStarted","Data":"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7"} Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.028122 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85061f18-4349-447f-b1ca-4a9a54461745","Type":"ContainerStarted","Data":"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c"} Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.027062 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-httpd" 
containerID="cri-o://49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7" gracePeriod=30 Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.026631 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-log" containerID="cri-o://0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c" gracePeriod=30 Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.037895 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" event={"ID":"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089","Type":"ContainerStarted","Data":"00f6a2dd44878296bf4733164be83ce28b802aa8c1f8943860365c668511c527"} Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.038047 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.041112 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a718b748-698c-44cc-8a28-b66a97405c41","Type":"ContainerStarted","Data":"372dbc3e463623e2b9f3493644a607e3b3dd6b5d454b6497db8ef4d380851ed9"} Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.048553 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.048511627 podStartE2EDuration="4.048511627s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:51.0430667 +0000 UTC m=+1086.412976957" watchObservedRunningTime="2026-01-30 05:25:51.048511627 +0000 UTC m=+1086.418421884" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.079250 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" podStartSLOduration=4.07921117 podStartE2EDuration="4.07921117s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:51.072038093 +0000 UTC m=+1086.441948360" watchObservedRunningTime="2026-01-30 05:25:51.07921117 +0000 UTC m=+1086.449121427" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.434770 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81259525-e98e-4119-9071-2e17b0fb1640" path="/var/lib/kubelet/pods/81259525-e98e-4119-9071-2e17b0fb1640/volumes" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.638469 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.808679 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-httpd-run\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809028 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-combined-ca-bundle\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809060 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809090 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-config-data\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809121 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-logs\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809144 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-scripts\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809161 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809217 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6xdb\" (UniqueName: \"kubernetes.io/projected/85061f18-4349-447f-b1ca-4a9a54461745-kube-api-access-l6xdb\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809250 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-public-tls-certs\") pod \"85061f18-4349-447f-b1ca-4a9a54461745\" (UID: \"85061f18-4349-447f-b1ca-4a9a54461745\") " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809557 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.809690 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-logs" (OuterVolumeSpecName: "logs") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.815592 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.817559 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-scripts" (OuterVolumeSpecName: "scripts") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.818627 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85061f18-4349-447f-b1ca-4a9a54461745-kube-api-access-l6xdb" (OuterVolumeSpecName: "kube-api-access-l6xdb") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "kube-api-access-l6xdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.838988 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.862466 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-config-data" (OuterVolumeSpecName: "config-data") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.874195 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "85061f18-4349-447f-b1ca-4a9a54461745" (UID: "85061f18-4349-447f-b1ca-4a9a54461745"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.910942 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6xdb\" (UniqueName: \"kubernetes.io/projected/85061f18-4349-447f-b1ca-4a9a54461745-kube-api-access-l6xdb\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.910980 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.910989 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.911024 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.911036 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.911045 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85061f18-4349-447f-b1ca-4a9a54461745-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.911054 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85061f18-4349-447f-b1ca-4a9a54461745-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:51 crc kubenswrapper[4931]: I0130 05:25:51.948218 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.012587 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.057771 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a718b748-698c-44cc-8a28-b66a97405c41","Type":"ContainerStarted","Data":"9c62317118ee4bc559fe155e8ef7df2b681354b8f9e1e6b6cb1521be8624a39c"} Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.057992 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-log" containerID="cri-o://372dbc3e463623e2b9f3493644a607e3b3dd6b5d454b6497db8ef4d380851ed9" gracePeriod=30 Jan 30 05:25:52 crc kubenswrapper[4931]: 
I0130 05:25:52.058202 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-httpd" containerID="cri-o://9c62317118ee4bc559fe155e8ef7df2b681354b8f9e1e6b6cb1521be8624a39c" gracePeriod=30 Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063085 4931 generic.go:334] "Generic (PLEG): container finished" podID="85061f18-4349-447f-b1ca-4a9a54461745" containerID="49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7" exitCode=143 Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063124 4931 generic.go:334] "Generic (PLEG): container finished" podID="85061f18-4349-447f-b1ca-4a9a54461745" containerID="0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c" exitCode=143 Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063212 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85061f18-4349-447f-b1ca-4a9a54461745","Type":"ContainerDied","Data":"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7"} Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063265 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85061f18-4349-447f-b1ca-4a9a54461745","Type":"ContainerDied","Data":"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c"} Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063321 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"85061f18-4349-447f-b1ca-4a9a54461745","Type":"ContainerDied","Data":"e854d57483e337fee1187a460568d29dfc2cb253eaa1fc3cbbd817c58be36b42"} Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063341 4931 scope.go:117] "RemoveContainer" containerID="49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.063925 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.093020 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.092996864 podStartE2EDuration="5.092996864s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:25:52.081504644 +0000 UTC m=+1087.451414911" watchObservedRunningTime="2026-01-30 05:25:52.092996864 +0000 UTC m=+1087.462907121" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.115162 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.121604 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.139270 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:52 crc kubenswrapper[4931]: E0130 05:25:52.139685 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-httpd" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.139697 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-httpd" Jan 30 05:25:52 crc kubenswrapper[4931]: E0130 05:25:52.139712 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-log" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.139735 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-log" Jan 30 05:25:52 crc kubenswrapper[4931]: E0130 05:25:52.139747 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81259525-e98e-4119-9071-2e17b0fb1640" containerName="init" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.139752 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="81259525-e98e-4119-9071-2e17b0fb1640" containerName="init" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.140662 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-httpd" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.140692 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="81259525-e98e-4119-9071-2e17b0fb1640" containerName="init" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.140703 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="85061f18-4349-447f-b1ca-4a9a54461745" containerName="glance-log" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.141529 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.145480 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.145544 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.149261 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215176 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-scripts\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215221 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czq79\" (UniqueName: \"kubernetes.io/projected/18f01f64-f6e4-42f3-80f8-27c86f82eeef-kube-api-access-czq79\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215284 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215334 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215361 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-config-data\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215386 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-logs\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215410 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.215462 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316502 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-config-data\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-logs\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316597 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316642 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316720 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-scripts\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316744 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czq79\" (UniqueName: \"kubernetes.io/projected/18f01f64-f6e4-42f3-80f8-27c86f82eeef-kube-api-access-czq79\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316779 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.316826 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.322501 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.322972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-logs\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.323539 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.329629 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.334191 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.336864 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-config-data\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.341794 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czq79\" (UniqueName: \"kubernetes.io/projected/18f01f64-f6e4-42f3-80f8-27c86f82eeef-kube-api-access-czq79\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.342046 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-scripts\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.348540 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " pod="openstack/glance-default-external-api-0" Jan 30 05:25:52 crc kubenswrapper[4931]: I0130 05:25:52.468179 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.075266 4931 generic.go:334] "Generic (PLEG): container finished" podID="a718b748-698c-44cc-8a28-b66a97405c41" containerID="9c62317118ee4bc559fe155e8ef7df2b681354b8f9e1e6b6cb1521be8624a39c" exitCode=0 Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.075292 4931 generic.go:334] "Generic (PLEG): container finished" podID="a718b748-698c-44cc-8a28-b66a97405c41" containerID="372dbc3e463623e2b9f3493644a607e3b3dd6b5d454b6497db8ef4d380851ed9" exitCode=143 Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.075318 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a718b748-698c-44cc-8a28-b66a97405c41","Type":"ContainerDied","Data":"9c62317118ee4bc559fe155e8ef7df2b681354b8f9e1e6b6cb1521be8624a39c"} Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.075396 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a718b748-698c-44cc-8a28-b66a97405c41","Type":"ContainerDied","Data":"372dbc3e463623e2b9f3493644a607e3b3dd6b5d454b6497db8ef4d380851ed9"} Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.079978 4931 generic.go:334] "Generic (PLEG): container finished" podID="8946f758-7352-4859-a3c3-b98bca9b99e4" containerID="43be7b60c2a00736cc6eb1df08fb2617062a5a1b05069e0aa41b60294b71b16c" exitCode=0 Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.080019 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2gl9c" event={"ID":"8946f758-7352-4859-a3c3-b98bca9b99e4","Type":"ContainerDied","Data":"43be7b60c2a00736cc6eb1df08fb2617062a5a1b05069e0aa41b60294b71b16c"} Jan 30 05:25:53 crc kubenswrapper[4931]: I0130 05:25:53.457969 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85061f18-4349-447f-b1ca-4a9a54461745" path="/var/lib/kubelet/pods/85061f18-4349-447f-b1ca-4a9a54461745/volumes" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.450699 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.459384 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.604658 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-httpd-run\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605072 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-fernet-keys\") pod \"8946f758-7352-4859-a3c3-b98bca9b99e4\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605277 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-config-data\") pod \"8946f758-7352-4859-a3c3-b98bca9b99e4\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605471 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605490 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wwl\" (UniqueName: \"kubernetes.io/projected/8946f758-7352-4859-a3c3-b98bca9b99e4-kube-api-access-p7wwl\") pod \"8946f758-7352-4859-a3c3-b98bca9b99e4\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605631 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-config-data\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605714 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-credential-keys\") pod \"8946f758-7352-4859-a3c3-b98bca9b99e4\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605773 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-combined-ca-bundle\") pod \"8946f758-7352-4859-a3c3-b98bca9b99e4\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605847 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-scripts\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605888 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-combined-ca-bundle\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.605947 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-internal-tls-certs\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.606012 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-scripts\") pod \"8946f758-7352-4859-a3c3-b98bca9b99e4\" (UID: \"8946f758-7352-4859-a3c3-b98bca9b99e4\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.606044 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-logs\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.606461 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-logs" (OuterVolumeSpecName: "logs") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.606811 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2l92\" (UniqueName: \"kubernetes.io/projected/a718b748-698c-44cc-8a28-b66a97405c41-kube-api-access-f2l92\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.606857 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a718b748-698c-44cc-8a28-b66a97405c41\" (UID: \"a718b748-698c-44cc-8a28-b66a97405c41\") " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.607708 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.607737 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a718b748-698c-44cc-8a28-b66a97405c41-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.611899 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8946f758-7352-4859-a3c3-b98bca9b99e4" (UID: "8946f758-7352-4859-a3c3-b98bca9b99e4"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.612679 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-scripts" (OuterVolumeSpecName: "scripts") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.612766 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-scripts" (OuterVolumeSpecName: "scripts") pod "8946f758-7352-4859-a3c3-b98bca9b99e4" (UID: "8946f758-7352-4859-a3c3-b98bca9b99e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.616103 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.625150 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a718b748-698c-44cc-8a28-b66a97405c41-kube-api-access-f2l92" (OuterVolumeSpecName: "kube-api-access-f2l92") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "kube-api-access-f2l92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.624601 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8946f758-7352-4859-a3c3-b98bca9b99e4" (UID: "8946f758-7352-4859-a3c3-b98bca9b99e4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.627085 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8946f758-7352-4859-a3c3-b98bca9b99e4-kube-api-access-p7wwl" (OuterVolumeSpecName: "kube-api-access-p7wwl") pod "8946f758-7352-4859-a3c3-b98bca9b99e4" (UID: "8946f758-7352-4859-a3c3-b98bca9b99e4"). InnerVolumeSpecName "kube-api-access-p7wwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.640861 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.641565 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-config-data" (OuterVolumeSpecName: "config-data") pod "8946f758-7352-4859-a3c3-b98bca9b99e4" (UID: "8946f758-7352-4859-a3c3-b98bca9b99e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.657691 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-config-data" (OuterVolumeSpecName: "config-data") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.660537 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8946f758-7352-4859-a3c3-b98bca9b99e4" (UID: "8946f758-7352-4859-a3c3-b98bca9b99e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.690940 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a718b748-698c-44cc-8a28-b66a97405c41" (UID: "a718b748-698c-44cc-8a28-b66a97405c41"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709602 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709643 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709657 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2l92\" (UniqueName: \"kubernetes.io/projected/a718b748-698c-44cc-8a28-b66a97405c41-kube-api-access-f2l92\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709696 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709710 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709722 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709733 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7wwl\" (UniqueName: \"kubernetes.io/projected/8946f758-7352-4859-a3c3-b98bca9b99e4-kube-api-access-p7wwl\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709744 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 
05:25:56.709756 4931 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709767 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8946f758-7352-4859-a3c3-b98bca9b99e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709777 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.709789 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a718b748-698c-44cc-8a28-b66a97405c41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.736983 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 05:25:56 crc kubenswrapper[4931]: I0130 05:25:56.811579 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.123710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a718b748-698c-44cc-8a28-b66a97405c41","Type":"ContainerDied","Data":"c3f9b84bf435cb31e157c89826a592f8cd6b22ac59d33e952e46664d8ee81ab7"} Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.123803 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.129439 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2gl9c" event={"ID":"8946f758-7352-4859-a3c3-b98bca9b99e4","Type":"ContainerDied","Data":"c26bd96ed17ec97ce33862cd249a01687dfe99dfd46407b53a72a430d1e772ff"} Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.129472 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26bd96ed17ec97ce33862cd249a01687dfe99dfd46407b53a72a430d1e772ff" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.129517 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2gl9c" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.164924 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.174997 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.197524 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:57 crc kubenswrapper[4931]: E0130 05:25:57.197915 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-httpd" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.197931 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-httpd" Jan 30 05:25:57 crc kubenswrapper[4931]: E0130 05:25:57.197947 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-log" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.197953 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-log" Jan 30 05:25:57 crc kubenswrapper[4931]: E0130 05:25:57.197966 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8946f758-7352-4859-a3c3-b98bca9b99e4" containerName="keystone-bootstrap" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.197972 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8946f758-7352-4859-a3c3-b98bca9b99e4" containerName="keystone-bootstrap" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.198131 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-log" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.198141 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a718b748-698c-44cc-8a28-b66a97405c41" containerName="glance-httpd" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.198151 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="8946f758-7352-4859-a3c3-b98bca9b99e4" containerName="keystone-bootstrap" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.199016 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.201622 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.202640 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.208307 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.321949 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.322348 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn59m\" (UniqueName: \"kubernetes.io/projected/97f44787-3f37-44f1-85a5-4acffef71d95-kube-api-access-vn59m\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.322401 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.322472 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.322506 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.322851 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-logs\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.322931 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.323005 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.363124 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.363190 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424146 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424223 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424245 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn59m\" (UniqueName: \"kubernetes.io/projected/97f44787-3f37-44f1-85a5-4acffef71d95-kube-api-access-vn59m\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424285 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424327 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424354 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424431 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-logs\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.424883 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.432053 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.432724 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-logs\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.436048 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.437174 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a718b748-698c-44cc-8a28-b66a97405c41" path="/var/lib/kubelet/pods/a718b748-698c-44cc-8a28-b66a97405c41/volumes" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.437941 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.438719 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.440135 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.453815 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn59m\" (UniqueName: \"kubernetes.io/projected/97f44787-3f37-44f1-85a5-4acffef71d95-kube-api-access-vn59m\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.458767 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.530111 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.555263 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2gl9c"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.562319 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2gl9c"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.633085 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sdn7d"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.634344 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.638882 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.638945 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.639121 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.639627 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.639938 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llv5h" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.650841 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sdn7d"] Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.732486 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9zg\" (UniqueName: \"kubernetes.io/projected/2400d2d7-1da5-4a38-a558-c970226f95b9-kube-api-access-6g9zg\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.732558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-config-data\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.732638 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-combined-ca-bundle\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.732678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-fernet-keys\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.732703 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-credential-keys\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.732745 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-scripts\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.834729 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g9zg\" (UniqueName: \"kubernetes.io/projected/2400d2d7-1da5-4a38-a558-c970226f95b9-kube-api-access-6g9zg\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.834813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-config-data\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.834858 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-combined-ca-bundle\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.834887 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-fernet-keys\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.834918 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-credential-keys\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.834972 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-scripts\") pod 
\"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.840494 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-fernet-keys\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.841958 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-combined-ca-bundle\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.843822 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-scripts\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.852663 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-credential-keys\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.855326 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-config-data\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.860152 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g9zg\" (UniqueName: \"kubernetes.io/projected/2400d2d7-1da5-4a38-a558-c970226f95b9-kube-api-access-6g9zg\") pod \"keystone-bootstrap-sdn7d\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.960018 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:25:57 crc kubenswrapper[4931]: I0130 05:25:57.973411 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:25:58 crc kubenswrapper[4931]: I0130 05:25:58.037923 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-8c6pt"] Jan 30 05:25:58 crc kubenswrapper[4931]: I0130 05:25:58.038330 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns" containerID="cri-o://566ade23da173169f793e0bfc68dfb7fc94d967bbb01c1ecaa6d6c7476150a48" gracePeriod=10 Jan 30 05:25:58 crc kubenswrapper[4931]: I0130 05:25:58.299086 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Jan 30 05:25:59 crc kubenswrapper[4931]: I0130 05:25:59.156706 4931 generic.go:334] "Generic (PLEG): container finished" podID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerID="566ade23da173169f793e0bfc68dfb7fc94d967bbb01c1ecaa6d6c7476150a48" exitCode=0 Jan 30 05:25:59 crc kubenswrapper[4931]: I0130 05:25:59.156846 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" event={"ID":"be176172-3d0c-47ae-aa98-d7ee20022f44","Type":"ContainerDied","Data":"566ade23da173169f793e0bfc68dfb7fc94d967bbb01c1ecaa6d6c7476150a48"} Jan 30 05:25:59 crc kubenswrapper[4931]: I0130 05:25:59.439235 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8946f758-7352-4859-a3c3-b98bca9b99e4" path="/var/lib/kubelet/pods/8946f758-7352-4859-a3c3-b98bca9b99e4/volumes" Jan 30 05:26:00 crc kubenswrapper[4931]: I0130 05:26:00.160253 4931 scope.go:117] "RemoveContainer" containerID="0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c" Jan 30 05:26:01 crc kubenswrapper[4931]: E0130 05:26:01.098112 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777" Jan 30 05:26:01 crc kubenswrapper[4931]: E0130 05:26:01.098728 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n698h544h5d6h5f4hcbh5d4h5fh579hch65ch566h667h64fh56bh5f8hf4h59dh557h8dh54fh8h557h684h667h575h5f4hd6h686h64ch686h644h5bdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nl9x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(debbaca0-0d1f-47cd-bb8e-8e09e4a65307): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:26:03 crc kubenswrapper[4931]: I0130 05:26:03.298866 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Jan 30 05:26:08 crc kubenswrapper[4931]: I0130 05:26:08.298944 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Jan 30 05:26:08 crc kubenswrapper[4931]: I0130 05:26:08.299668 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.449413 4931 scope.go:117] "RemoveContainer" containerID="49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7" Jan 30 05:26:10 crc kubenswrapper[4931]: E0130 05:26:10.450371 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7\": container with ID starting with 49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7 not found: ID does not exist" containerID="49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.450414 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7"} err="failed to get container status \"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7\": rpc error: code = NotFound desc = could not find container \"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7\": container with ID starting with 49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7 not found: ID does not exist" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.450464 4931 scope.go:117] "RemoveContainer" containerID="0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c" Jan 30 05:26:10 crc kubenswrapper[4931]: E0130 05:26:10.450733 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c\": container with ID starting with 0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c not found: ID does not exist" containerID="0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.450753 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c"} err="failed to get container status \"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c\": rpc error: code = NotFound desc = could not find container \"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c\": container with ID starting with 0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c not found: ID does not exist" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.450765 4931 scope.go:117] "RemoveContainer" containerID="49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.450957 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7"} err="failed to get container status \"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7\": rpc error: code = NotFound desc = could not find container \"49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7\": container with ID starting with 49b76283e35cae85c5b79682a6f7a6d67465b0ab8bd0c34c0e7867dac854fcf7 not found: ID does not exist" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.450971 4931 scope.go:117] "RemoveContainer" containerID="0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.451168 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c"} err="failed to get container status \"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c\": rpc error: code = NotFound desc = could not find container \"0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c\": container 
with ID starting with 0e5c240e7423824a5311e12dfc17b9311e67c5ad4c50c5d91193f61d25e11d1c not found: ID does not exist" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.451182 4931 scope.go:117] "RemoveContainer" containerID="9c62317118ee4bc559fe155e8ef7df2b681354b8f9e1e6b6cb1521be8624a39c" Jan 30 05:26:10 crc kubenswrapper[4931]: E0130 05:26:10.458039 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 30 05:26:10 crc kubenswrapper[4931]: E0130 05:26:10.458523 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6jwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-rpr97_openstack(6dd6723b-baf8-47eb-a774-68a5dfbcc4a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 05:26:10 crc kubenswrapper[4931]: E0130 05:26:10.460597 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-rpr97" podUID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.562613 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.697299 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-svc\") pod \"be176172-3d0c-47ae-aa98-d7ee20022f44\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.697492 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-sb\") pod \"be176172-3d0c-47ae-aa98-d7ee20022f44\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.697554 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-swift-storage-0\") pod \"be176172-3d0c-47ae-aa98-d7ee20022f44\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.697580 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-nb\") pod \"be176172-3d0c-47ae-aa98-d7ee20022f44\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.697605 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-config\") pod \"be176172-3d0c-47ae-aa98-d7ee20022f44\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.697620 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p52jc\" (UniqueName: \"kubernetes.io/projected/be176172-3d0c-47ae-aa98-d7ee20022f44-kube-api-access-p52jc\") pod \"be176172-3d0c-47ae-aa98-d7ee20022f44\" (UID: \"be176172-3d0c-47ae-aa98-d7ee20022f44\") " Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.731863 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be176172-3d0c-47ae-aa98-d7ee20022f44-kube-api-access-p52jc" (OuterVolumeSpecName: "kube-api-access-p52jc") pod "be176172-3d0c-47ae-aa98-d7ee20022f44" (UID: "be176172-3d0c-47ae-aa98-d7ee20022f44"). InnerVolumeSpecName "kube-api-access-p52jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.753284 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-config" (OuterVolumeSpecName: "config") pod "be176172-3d0c-47ae-aa98-d7ee20022f44" (UID: "be176172-3d0c-47ae-aa98-d7ee20022f44"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.755371 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be176172-3d0c-47ae-aa98-d7ee20022f44" (UID: "be176172-3d0c-47ae-aa98-d7ee20022f44"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.755455 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be176172-3d0c-47ae-aa98-d7ee20022f44" (UID: "be176172-3d0c-47ae-aa98-d7ee20022f44"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.758266 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be176172-3d0c-47ae-aa98-d7ee20022f44" (UID: "be176172-3d0c-47ae-aa98-d7ee20022f44"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.767250 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be176172-3d0c-47ae-aa98-d7ee20022f44" (UID: "be176172-3d0c-47ae-aa98-d7ee20022f44"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.800470 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.800521 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.800532 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p52jc\" (UniqueName: \"kubernetes.io/projected/be176172-3d0c-47ae-aa98-d7ee20022f44-kube-api-access-p52jc\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.800543 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.800551 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.800558 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be176172-3d0c-47ae-aa98-d7ee20022f44-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.887906 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.900457 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sdn7d"] Jan 30 05:26:10 crc kubenswrapper[4931]: W0130 05:26:10.960385 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2400d2d7_1da5_4a38_a558_c970226f95b9.slice/crio-e46a6a605ff6b2a20359cac2b9ef2f718cb5499b26c58dd581090cd69c65e837 WatchSource:0}: Error finding container e46a6a605ff6b2a20359cac2b9ef2f718cb5499b26c58dd581090cd69c65e837: Status 404 returned error can't find the container with id e46a6a605ff6b2a20359cac2b9ef2f718cb5499b26c58dd581090cd69c65e837 Jan 30 05:26:10 crc kubenswrapper[4931]: I0130 05:26:10.972937 4931 scope.go:117] "RemoveContainer" containerID="372dbc3e463623e2b9f3493644a607e3b3dd6b5d454b6497db8ef4d380851ed9" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.286046 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18f01f64-f6e4-42f3-80f8-27c86f82eeef","Type":"ContainerStarted","Data":"f169988e956408b39f47bea60212630dcedf5b4c3315a89463a6589988357590"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.290339 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ldr24" event={"ID":"f3ddcee7-a757-43b5-bf76-552cbd8d9078","Type":"ContainerStarted","Data":"5737a395d9d729d9146705b5cbd342fcc1f4ff9a1712777cd6b02a06ccdce9e4"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.299500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerStarted","Data":"ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.306354 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ldr24" podStartSLOduration=2.522752548 podStartE2EDuration="24.306332127s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="2026-01-30 05:25:48.689544881 +0000 UTC m=+1084.059455138" lastFinishedPulling="2026-01-30 05:26:10.47312446 +0000 UTC m=+1105.843034717" observedRunningTime="2026-01-30 05:26:11.30386784 +0000 UTC m=+1106.673778097" watchObservedRunningTime="2026-01-30 05:26:11.306332127 +0000 UTC m=+1106.676242384" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.307777 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdn7d" event={"ID":"2400d2d7-1da5-4a38-a558-c970226f95b9","Type":"ContainerStarted","Data":"703277ac00408ad9f7d1f58fc77ac68f5eef4a1090a051f9ca88ddf484b5fda4"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.307820 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdn7d" event={"ID":"2400d2d7-1da5-4a38-a558-c970226f95b9","Type":"ContainerStarted","Data":"e46a6a605ff6b2a20359cac2b9ef2f718cb5499b26c58dd581090cd69c65e837"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.315041 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.315071 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-8c6pt" event={"ID":"be176172-3d0c-47ae-aa98-d7ee20022f44","Type":"ContainerDied","Data":"af809bcfb9bcd948f444820cb7e724048ff5c243bf6772c74d31c5eab0630ea9"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.315163 4931 scope.go:117] "RemoveContainer" containerID="566ade23da173169f793e0bfc68dfb7fc94d967bbb01c1ecaa6d6c7476150a48" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.316908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fkqxj" event={"ID":"438fbbb5-a318-4714-9dac-e3f0fc3f63d3","Type":"ContainerStarted","Data":"d39f6f20169ccd7e0eea2f20181dc418bff8322ca21fcf82d4f5c3d022992a6f"} Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.327157 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-sdn7d" podStartSLOduration=14.327143485 podStartE2EDuration="14.327143485s" podCreationTimestamp="2026-01-30 05:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:11.323069588 +0000 UTC m=+1106.692979865" watchObservedRunningTime="2026-01-30 05:26:11.327143485 +0000 UTC m=+1106.697053742" Jan 30 05:26:11 crc kubenswrapper[4931]: E0130 05:26:11.329199 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-rpr97" podUID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.344223 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-fkqxj" podStartSLOduration=2.612813966 podStartE2EDuration="24.344207687s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="2026-01-30 05:25:48.726381481 +0000 UTC m=+1084.096291738" lastFinishedPulling="2026-01-30 05:26:10.457775202 +0000 UTC m=+1105.827685459" observedRunningTime="2026-01-30 05:26:11.338458778 +0000 UTC m=+1106.708369035" watchObservedRunningTime="2026-01-30 05:26:11.344207687 +0000 UTC m=+1106.714117944" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.374811 4931 scope.go:117] "RemoveContainer" containerID="aa6a0d8cd249f8b0104844bcd59d7c80f0ef6c784ec9f9d65e07215bbb280738" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.389462 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-8c6pt"] Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.394913 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-8c6pt"] Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.441652 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" path="/var/lib/kubelet/pods/be176172-3d0c-47ae-aa98-d7ee20022f44/volumes" Jan 30 05:26:11 crc kubenswrapper[4931]: I0130 05:26:11.811869 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.337515 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"18f01f64-f6e4-42f3-80f8-27c86f82eeef","Type":"ContainerStarted","Data":"6f20ab78e04ca2466a780c5cc51a4b37e0f487abee57f4e067c29bab7787be5d"} Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.337558 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18f01f64-f6e4-42f3-80f8-27c86f82eeef","Type":"ContainerStarted","Data":"d2ded790b556dc13af017d23c970f8fe6d49472a2741355949522d19b2e1e389"} Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.344652 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97f44787-3f37-44f1-85a5-4acffef71d95","Type":"ContainerStarted","Data":"c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57"} Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.344699 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97f44787-3f37-44f1-85a5-4acffef71d95","Type":"ContainerStarted","Data":"7bfff4eea4487971b7e050b186c84e3209413100130292fb4b6aba07f7e36bce"} Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.366409 4931 generic.go:334] "Generic (PLEG): container finished" podID="2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" containerID="6a0d3a37541dd8bff3edd7d0762f4af19258be892f0b28a3ee8ffd644ba91460" exitCode=0 Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.367195 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kbkmb" event={"ID":"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719","Type":"ContainerDied","Data":"6a0d3a37541dd8bff3edd7d0762f4af19258be892f0b28a3ee8ffd644ba91460"} Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.376861 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.376838133 podStartE2EDuration="20.376838133s" podCreationTimestamp="2026-01-30 05:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:12.364515219 +0000 UTC m=+1107.734425496" watchObservedRunningTime="2026-01-30 05:26:12.376838133 +0000 UTC m=+1107.746748390" Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.468973 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.469210 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.504344 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 05:26:12 crc kubenswrapper[4931]: I0130 05:26:12.510246 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 05:26:13 crc kubenswrapper[4931]: I0130 05:26:13.376780 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97f44787-3f37-44f1-85a5-4acffef71d95","Type":"ContainerStarted","Data":"4dc106a9347c18b30457fcfe0ba0955c89ca0037e4655a069609d973aaa2c8d2"} Jan 30 05:26:13 crc kubenswrapper[4931]: I0130 05:26:13.377106 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 05:26:13 crc kubenswrapper[4931]: 
I0130 05:26:13.377135 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 05:26:13 crc kubenswrapper[4931]: I0130 05:26:13.412658 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.412639897 podStartE2EDuration="16.412639897s" podCreationTimestamp="2026-01-30 05:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:13.400724665 +0000 UTC m=+1108.770634942" watchObservedRunningTime="2026-01-30 05:26:13.412639897 +0000 UTC m=+1108.782550154" Jan 30 05:26:15 crc kubenswrapper[4931]: I0130 05:26:15.435701 4931 generic.go:334] "Generic (PLEG): container finished" podID="2400d2d7-1da5-4a38-a558-c970226f95b9" containerID="703277ac00408ad9f7d1f58fc77ac68f5eef4a1090a051f9ca88ddf484b5fda4" exitCode=0 Jan 30 05:26:15 crc kubenswrapper[4931]: I0130 05:26:15.435801 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdn7d" event={"ID":"2400d2d7-1da5-4a38-a558-c970226f95b9","Type":"ContainerDied","Data":"703277ac00408ad9f7d1f58fc77ac68f5eef4a1090a051f9ca88ddf484b5fda4"} Jan 30 05:26:15 crc kubenswrapper[4931]: I0130 05:26:15.438548 4931 generic.go:334] "Generic (PLEG): container finished" podID="438fbbb5-a318-4714-9dac-e3f0fc3f63d3" containerID="d39f6f20169ccd7e0eea2f20181dc418bff8322ca21fcf82d4f5c3d022992a6f" exitCode=0 Jan 30 05:26:15 crc kubenswrapper[4931]: I0130 05:26:15.438579 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fkqxj" event={"ID":"438fbbb5-a318-4714-9dac-e3f0fc3f63d3","Type":"ContainerDied","Data":"d39f6f20169ccd7e0eea2f20181dc418bff8322ca21fcf82d4f5c3d022992a6f"} Jan 30 05:26:16 crc kubenswrapper[4931]: I0130 05:26:16.449024 4931 generic.go:334] "Generic (PLEG): container finished" podID="f3ddcee7-a757-43b5-bf76-552cbd8d9078" containerID="5737a395d9d729d9146705b5cbd342fcc1f4ff9a1712777cd6b02a06ccdce9e4" exitCode=0 Jan 30 05:26:16 crc kubenswrapper[4931]: I0130 05:26:16.449107 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ldr24" event={"ID":"f3ddcee7-a757-43b5-bf76-552cbd8d9078","Type":"ContainerDied","Data":"5737a395d9d729d9146705b5cbd342fcc1f4ff9a1712777cd6b02a06ccdce9e4"} Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.466031 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kbkmb" event={"ID":"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719","Type":"ContainerDied","Data":"7ad5adedbc116cdb578bc473211ba2fbb992ce127a4e1710f27293fb378d6bd0"} Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.468282 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ad5adedbc116cdb578bc473211ba2fbb992ce127a4e1710f27293fb378d6bd0" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.471176 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sdn7d" event={"ID":"2400d2d7-1da5-4a38-a558-c970226f95b9","Type":"ContainerDied","Data":"e46a6a605ff6b2a20359cac2b9ef2f718cb5499b26c58dd581090cd69c65e837"} Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.471239 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46a6a605ff6b2a20359cac2b9ef2f718cb5499b26c58dd581090cd69c65e837" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.479781 4931 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-db-sync-fkqxj" event={"ID":"438fbbb5-a318-4714-9dac-e3f0fc3f63d3","Type":"ContainerDied","Data":"46af11befd18c4fdbfdc15f44fec26d441cc576260685156c355baab6e60ddb1"} Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.479840 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46af11befd18c4fdbfdc15f44fec26d441cc576260685156c355baab6e60ddb1" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.530386 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.530502 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.566040 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.581735 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.617841 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kbkmb" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.648236 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fkqxj" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.666882 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdn7d" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724378 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-config-data\") pod \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724725 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-logs\") pod \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724748 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-credential-keys\") pod \"2400d2d7-1da5-4a38-a558-c970226f95b9\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724780 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-scripts\") pod \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724811 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lkqq\" (UniqueName: \"kubernetes.io/projected/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-kube-api-access-5lkqq\") pod \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724836 4931 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-combined-ca-bundle\") pod \"2400d2d7-1da5-4a38-a558-c970226f95b9\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724861 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-config-data\") pod \"2400d2d7-1da5-4a38-a558-c970226f95b9\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724881 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-config\") pod \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724913 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-scripts\") pod \"2400d2d7-1da5-4a38-a558-c970226f95b9\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724941 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-combined-ca-bundle\") pod \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.724985 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvg45\" (UniqueName: \"kubernetes.io/projected/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-kube-api-access-zvg45\") pod \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\" (UID: \"2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.725001 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-fernet-keys\") pod \"2400d2d7-1da5-4a38-a558-c970226f95b9\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.725050 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-combined-ca-bundle\") pod \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\" (UID: \"438fbbb5-a318-4714-9dac-e3f0fc3f63d3\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.725075 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g9zg\" (UniqueName: \"kubernetes.io/projected/2400d2d7-1da5-4a38-a558-c970226f95b9-kube-api-access-6g9zg\") pod \"2400d2d7-1da5-4a38-a558-c970226f95b9\" (UID: \"2400d2d7-1da5-4a38-a558-c970226f95b9\") " Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.727022 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-logs" (OuterVolumeSpecName: "logs") pod "438fbbb5-a318-4714-9dac-e3f0fc3f63d3" (UID: "438fbbb5-a318-4714-9dac-e3f0fc3f63d3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.729769 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-kube-api-access-5lkqq" (OuterVolumeSpecName: "kube-api-access-5lkqq") pod "438fbbb5-a318-4714-9dac-e3f0fc3f63d3" (UID: "438fbbb5-a318-4714-9dac-e3f0fc3f63d3"). InnerVolumeSpecName "kube-api-access-5lkqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.730541 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-scripts" (OuterVolumeSpecName: "scripts") pod "438fbbb5-a318-4714-9dac-e3f0fc3f63d3" (UID: "438fbbb5-a318-4714-9dac-e3f0fc3f63d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.732132 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2400d2d7-1da5-4a38-a558-c970226f95b9-kube-api-access-6g9zg" (OuterVolumeSpecName: "kube-api-access-6g9zg") pod "2400d2d7-1da5-4a38-a558-c970226f95b9" (UID: "2400d2d7-1da5-4a38-a558-c970226f95b9"). InnerVolumeSpecName "kube-api-access-6g9zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.732844 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-kube-api-access-zvg45" (OuterVolumeSpecName: "kube-api-access-zvg45") pod "2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" (UID: "2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719"). InnerVolumeSpecName "kube-api-access-zvg45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.748126 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2400d2d7-1da5-4a38-a558-c970226f95b9" (UID: "2400d2d7-1da5-4a38-a558-c970226f95b9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.763513 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-config-data" (OuterVolumeSpecName: "config-data") pod "438fbbb5-a318-4714-9dac-e3f0fc3f63d3" (UID: "438fbbb5-a318-4714-9dac-e3f0fc3f63d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.763597 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-scripts" (OuterVolumeSpecName: "scripts") pod "2400d2d7-1da5-4a38-a558-c970226f95b9" (UID: "2400d2d7-1da5-4a38-a558-c970226f95b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.763611 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2400d2d7-1da5-4a38-a558-c970226f95b9" (UID: "2400d2d7-1da5-4a38-a558-c970226f95b9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.767075 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "438fbbb5-a318-4714-9dac-e3f0fc3f63d3" (UID: "438fbbb5-a318-4714-9dac-e3f0fc3f63d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.768679 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-config" (OuterVolumeSpecName: "config") pod "2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" (UID: "2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.770140 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-config-data" (OuterVolumeSpecName: "config-data") pod "2400d2d7-1da5-4a38-a558-c970226f95b9" (UID: "2400d2d7-1da5-4a38-a558-c970226f95b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.771091 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" (UID: "2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.784873 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2400d2d7-1da5-4a38-a558-c970226f95b9" (UID: "2400d2d7-1da5-4a38-a558-c970226f95b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.795225 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ldr24"
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.833845 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-db-sync-config-data\") pod \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") "
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.834066 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p852h\" (UniqueName: \"kubernetes.io/projected/f3ddcee7-a757-43b5-bf76-552cbd8d9078-kube-api-access-p852h\") pod \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") "
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.834099 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-combined-ca-bundle\") pod \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\" (UID: \"f3ddcee7-a757-43b5-bf76-552cbd8d9078\") "
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.837125 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3ddcee7-a757-43b5-bf76-552cbd8d9078" (UID: "f3ddcee7-a757-43b5-bf76-552cbd8d9078"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.837691 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ddcee7-a757-43b5-bf76-552cbd8d9078-kube-api-access-p852h" (OuterVolumeSpecName: "kube-api-access-p852h") pod "f3ddcee7-a757-43b5-bf76-552cbd8d9078" (UID: "f3ddcee7-a757-43b5-bf76-552cbd8d9078"). InnerVolumeSpecName "kube-api-access-p852h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.844968 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvg45\" (UniqueName: \"kubernetes.io/projected/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-kube-api-access-zvg45\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.845092 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.845170 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.845239 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g9zg\" (UniqueName: \"kubernetes.io/projected/2400d2d7-1da5-4a38-a558-c970226f95b9-kube-api-access-6g9zg\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.845481 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846333 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846507 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-logs\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846596 4931 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846694 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846767 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lkqq\" (UniqueName: \"kubernetes.io/projected/438fbbb5-a318-4714-9dac-e3f0fc3f63d3-kube-api-access-5lkqq\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846846 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846918 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.846985 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.847052 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2400d2d7-1da5-4a38-a558-c970226f95b9-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.847129 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.847205 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p852h\" (UniqueName: \"kubernetes.io/projected/f3ddcee7-a757-43b5-bf76-552cbd8d9078-kube-api-access-p852h\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.863237 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3ddcee7-a757-43b5-bf76-552cbd8d9078" (UID: "f3ddcee7-a757-43b5-bf76-552cbd8d9078"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:26:17 crc kubenswrapper[4931]: I0130 05:26:17.948873 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ddcee7-a757-43b5-bf76-552cbd8d9078-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.491342 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerStarted","Data":"c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547"}
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.494999 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ldr24"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.495132 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sdn7d"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.495115 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ldr24" event={"ID":"f3ddcee7-a757-43b5-bf76-552cbd8d9078","Type":"ContainerDied","Data":"52736b18c683d128d17308fceb7bf9f60b140aff7c806fb9ba3a93b56cb26bc0"}
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.495220 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52736b18c683d128d17308fceb7bf9f60b140aff7c806fb9ba3a93b56cb26bc0"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.495330 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fkqxj"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.496285 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kbkmb"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.496515 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.496602 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.828733 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-c7z8q"]
Jan 30 05:26:18 crc kubenswrapper[4931]: E0130 05:26:18.829041 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" containerName="neutron-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829056 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" containerName="neutron-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: E0130 05:26:18.829067 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ddcee7-a757-43b5-bf76-552cbd8d9078" containerName="barbican-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829073 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ddcee7-a757-43b5-bf76-552cbd8d9078" containerName="barbican-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: E0130 05:26:18.829085 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438fbbb5-a318-4714-9dac-e3f0fc3f63d3" containerName="placement-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829092 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="438fbbb5-a318-4714-9dac-e3f0fc3f63d3" containerName="placement-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: E0130 05:26:18.829100 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="init"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829105 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="init"
Jan 30 05:26:18 crc kubenswrapper[4931]: E0130 05:26:18.829120 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829126 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns"
Jan 30 05:26:18 crc kubenswrapper[4931]: E0130 05:26:18.829139 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2400d2d7-1da5-4a38-a558-c970226f95b9" containerName="keystone-bootstrap"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829145 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2400d2d7-1da5-4a38-a558-c970226f95b9" containerName="keystone-bootstrap"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829293 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="438fbbb5-a318-4714-9dac-e3f0fc3f63d3" containerName="placement-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829310 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ddcee7-a757-43b5-bf76-552cbd8d9078" containerName="barbican-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829318 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="be176172-3d0c-47ae-aa98-d7ee20022f44" containerName="dnsmasq-dns"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829332 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2400d2d7-1da5-4a38-a558-c970226f95b9" containerName="keystone-bootstrap"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.829340 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" containerName="neutron-db-sync"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.830135 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.865549 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72vp\" (UniqueName: \"kubernetes.io/projected/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-kube-api-access-q72vp\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.865635 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-svc\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.865684 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.865701 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-config\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.865759 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.865784 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.872837 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-c7z8q"]
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.879688 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-67465d5765-cp74w"]
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.881044 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.884493 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.885459 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vt49t"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.890289 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.946477 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67465d5765-cp74w"]
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.967481 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-f7d589966-mkfs5"]
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968535 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vl6j\" (UniqueName: \"kubernetes.io/projected/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-kube-api-access-8vl6j\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968593 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-logs\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968662 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72vp\" (UniqueName: \"kubernetes.io/projected/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-kube-api-access-q72vp\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968724 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-svc\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968762 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968782 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-config\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968807 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data-custom\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968829 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968842 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968894 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-combined-ca-bundle\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.968919 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.969834 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.970164 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.970414 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-config\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.970930 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-svc\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.971102 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-97bdbd495-2prdt"]
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.971300 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.974935 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.981273 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.987329 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.992632 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.992865 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-llv5h"
Jan 30 05:26:18 crc kubenswrapper[4931]: I0130 05:26:18.992982 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.004681 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.009504 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.018928 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-97bdbd495-2prdt"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.032280 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72vp\" (UniqueName: \"kubernetes.io/projected/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-kube-api-access-q72vp\") pod \"dnsmasq-dns-6554f656b5-c7z8q\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") " pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.054483 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f7d589966-mkfs5"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070382 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data-custom\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070472 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data-custom\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070498 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-fernet-keys\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070517 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070540 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-combined-ca-bundle\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070574 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807d8709-a403-4186-83f5-ec76aee793fe-logs\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070598 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-combined-ca-bundle\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070617 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx75c\" (UniqueName: \"kubernetes.io/projected/2d6e5156-6e75-4dff-a322-b3d43e596c7e-kube-api-access-hx75c\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070641 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrvbk\" (UniqueName: \"kubernetes.io/projected/807d8709-a403-4186-83f5-ec76aee793fe-kube-api-access-jrvbk\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070689 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vl6j\" (UniqueName: \"kubernetes.io/projected/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-kube-api-access-8vl6j\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070711 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-logs\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.070740 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-public-tls-certs\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.071204 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-logs\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.071276 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-internal-tls-certs\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.071934 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-combined-ca-bundle\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.071988 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-scripts\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.072011 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-credential-keys\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.072065 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-config-data\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.072157 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.079917 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-combined-ca-bundle\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.085338 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data-custom\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.089825 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.121184 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vl6j\" (UniqueName: \"kubernetes.io/projected/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-kube-api-access-8vl6j\") pod \"barbican-worker-67465d5765-cp74w\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.134812 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d9d68b44b-5gp25"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.136091 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.147401 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.147692 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.147803 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.147903 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fttzx"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.148038 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.156203 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176480 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-combined-ca-bundle\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176532 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bz2z\" (UniqueName: \"kubernetes.io/projected/b92991ff-5b79-452a-b5ac-9dc90ab42f68-kube-api-access-9bz2z\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176556 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807d8709-a403-4186-83f5-ec76aee793fe-logs\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx75c\" (UniqueName: \"kubernetes.io/projected/2d6e5156-6e75-4dff-a322-b3d43e596c7e-kube-api-access-hx75c\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176600 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrvbk\" (UniqueName: \"kubernetes.io/projected/807d8709-a403-4186-83f5-ec76aee793fe-kube-api-access-jrvbk\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176617 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-internal-tls-certs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176632 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-scripts\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176663 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-public-tls-certs\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176679 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-config-data\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176694 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-internal-tls-certs\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176715 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-combined-ca-bundle\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176732 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-scripts\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176749 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-credential-keys\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176764 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92991ff-5b79-452a-b5ac-9dc90ab42f68-logs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176790 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-config-data\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176807 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-public-tls-certs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176837 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176856 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data-custom\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176877 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-combined-ca-bundle\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.176907 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-fernet-keys\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.181970 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-internal-tls-certs\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.184883 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9d68b44b-5gp25"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.185641 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807d8709-a403-4186-83f5-ec76aee793fe-logs\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.191826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data-custom\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.194440 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-scripts\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.194786 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67465d5765-cp74w"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.198343 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-combined-ca-bundle\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.198862 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-config-data\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.198871 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-fernet-keys\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.199400 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-combined-ca-bundle\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.200897 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-credential-keys\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.209295 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-public-tls-certs\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.209627 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.237514 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx75c\" (UniqueName: \"kubernetes.io/projected/2d6e5156-6e75-4dff-a322-b3d43e596c7e-kube-api-access-hx75c\") pod \"keystone-97bdbd495-2prdt\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.239508 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrvbk\" (UniqueName: \"kubernetes.io/projected/807d8709-a403-4186-83f5-ec76aee793fe-kube-api-access-jrvbk\") pod \"barbican-keystone-listener-f7d589966-mkfs5\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.272538 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6665f9d796-74mbd"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.277148 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.278977 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-internal-tls-certs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.279030 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-scripts\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.279074 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-config-data\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.279106 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92991ff-5b79-452a-b5ac-9dc90ab42f68-logs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.279133 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-public-tls-certs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.279185 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-combined-ca-bundle\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.279232 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bz2z\" (UniqueName: \"kubernetes.io/projected/b92991ff-5b79-452a-b5ac-9dc90ab42f68-kube-api-access-9bz2z\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.283774 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92991ff-5b79-452a-b5ac-9dc90ab42f68-logs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.287970 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-internal-tls-certs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.292602 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-combined-ca-bundle\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.292880 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.294385 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.295069 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-config-data\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.297015 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-scripts\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.297470 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-public-tls-certs\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.309242 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6665f9d796-74mbd"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.309664 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.338462 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dc49c789d-5gcj4"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.339781 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.342707 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.343280 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.343361 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ffrzt"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.365131 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.366696 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-c7z8q"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.370232 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bz2z\" (UniqueName: \"kubernetes.io/projected/b92991ff-5b79-452a-b5ac-9dc90ab42f68-kube-api-access-9bz2z\") pod \"placement-d9d68b44b-5gp25\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403466 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9lwj\" (UniqueName: \"kubernetes.io/projected/28e3fd91-5906-4368-b156-e0d60f3c268e-kube-api-access-f9lwj\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403533 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-ovndb-tls-certs\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403603 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e3fd91-5906-4368-b156-e0d60f3c268e-logs\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403624 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-config\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403703 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-combined-ca-bundle\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403749 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-combined-ca-bundle\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403801 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbtk\" (UniqueName: \"kubernetes.io/projected/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-kube-api-access-fgbtk\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403817 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data-custom\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403843 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-httpd-config\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.403943 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.515819 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9lwj\" (UniqueName: \"kubernetes.io/projected/28e3fd91-5906-4368-b156-e0d60f3c268e-kube-api-access-f9lwj\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.515869 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-ovndb-tls-certs\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.515903 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e3fd91-5906-4368-b156-e0d60f3c268e-logs\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.515921 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-config\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.515958 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-combined-ca-bundle\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.515982 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-combined-ca-bundle\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.516008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data-custom\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.516021 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgbtk\" (UniqueName: \"kubernetes.io/projected/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-kube-api-access-fgbtk\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.516038 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-httpd-config\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.516078 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.527581 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e3fd91-5906-4368-b156-e0d60f3c268e-logs\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.535281 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data-custom\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.542587 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.544998 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-httpd-config\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.545435 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-combined-ca-bundle\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.545809 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9d68b44b-5gp25"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.551015 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-ovndb-tls-certs\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.554107 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-config\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.564476 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgbtk\" (UniqueName: \"kubernetes.io/projected/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-kube-api-access-fgbtk\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.565461 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc49c789d-5gcj4"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.565654 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-zmls6"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.567820 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.619240 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-combined-ca-bundle\") pod \"neutron-dc49c789d-5gcj4\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.620567 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9lwj\" (UniqueName: \"kubernetes.io/projected/28e3fd91-5906-4368-b156-e0d60f3c268e-kube-api-access-f9lwj\") pod \"barbican-api-6665f9d796-74mbd\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.631479 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-zmls6"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.647071 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.650057 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.664964 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7c996f77-c9rqm"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.666485 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c996f77-c9rqm"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.675704 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c996f77-c9rqm"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.690019 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"]
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.725242 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.729559 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.729630 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.729701 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lmn\" (UniqueName: \"kubernetes.io/projected/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-kube-api-access-z2lmn\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.729875 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-config\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.729948 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.729969 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6"
Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.738496 4931 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openstack/neutron-687c697484-j2btt"] Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.739948 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.748924 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc49c789d-5gcj4" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.754974 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-798b7dc5fb-xl2zq"] Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.762445 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.764824 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-794bfbdd44-9msr6"] Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.767412 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.772338 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-687c697484-j2btt"] Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.784541 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-798b7dc5fb-xl2zq"] Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831483 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-ovndb-tls-certs\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831746 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-combined-ca-bundle\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831769 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data-custom\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831787 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-combined-ca-bundle\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831805 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831824 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-config\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831842 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831872 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-combined-ca-bundle\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831918 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7729e2d8-6c8c-4759-9e5d-535ad1586f47-logs\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831937 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2e60-915e-4447-9465-aa64f7f5c7bb-logs\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831954 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpv7q\" (UniqueName: \"kubernetes.io/projected/7729e2d8-6c8c-4759-9e5d-535ad1586f47-kube-api-access-vpv7q\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.831972 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lmn\" (UniqueName: \"kubernetes.io/projected/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-kube-api-access-z2lmn\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832023 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29mt\" (UniqueName: \"kubernetes.io/projected/728a2e60-915e-4447-9465-aa64f7f5c7bb-kube-api-access-r29mt\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: 
\"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832068 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-config\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832086 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data-custom\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832106 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832135 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832151 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832171 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-httpd-config\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.832185 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwc6\" (UniqueName: \"kubernetes.io/projected/84203bc9-afb4-42cb-843d-c211490ce275-kube-api-access-6jwc6\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.833843 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.833856 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.834368 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.834633 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.835136 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-config\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.836133 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-794bfbdd44-9msr6"] Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.853265 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lmn\" (UniqueName: \"kubernetes.io/projected/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-kube-api-access-z2lmn\") pod \"dnsmasq-dns-7bdf86f46f-zmls6\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.904442 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935112 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-combined-ca-bundle\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935152 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data-custom\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935170 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-combined-ca-bundle\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935191 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-config\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935213 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-public-tls-certs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935231 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-combined-ca-bundle\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935259 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935290 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-combined-ca-bundle\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935311 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-scripts\") pod \"placement-798b7dc5fb-xl2zq\" (UID: 
\"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935326 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7729e2d8-6c8c-4759-9e5d-535ad1586f47-logs\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935341 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2e60-915e-4447-9465-aa64f7f5c7bb-logs\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935359 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l264t\" (UniqueName: \"kubernetes.io/projected/c3dfec36-0758-42c6-8c28-997044eb59a3-kube-api-access-l264t\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935377 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpv7q\" (UniqueName: \"kubernetes.io/projected/7729e2d8-6c8c-4759-9e5d-535ad1586f47-kube-api-access-vpv7q\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935393 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-combined-ca-bundle\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935413 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935462 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3dfec36-0758-42c6-8c28-997044eb59a3-logs\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935482 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r29mt\" (UniqueName: \"kubernetes.io/projected/728a2e60-915e-4447-9465-aa64f7f5c7bb-kube-api-access-r29mt\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935497 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ebe4f743-9a60-428f-8b58-14ba160d9fd7-logs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935516 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68vcv\" (UniqueName: \"kubernetes.io/projected/ebe4f743-9a60-428f-8b58-14ba160d9fd7-kube-api-access-68vcv\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935557 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data-custom\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935578 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935619 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-httpd-config\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935633 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwc6\" (UniqueName: \"kubernetes.io/projected/84203bc9-afb4-42cb-843d-c211490ce275-kube-api-access-6jwc6\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935662 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-config-data\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935680 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-internal-tls-certs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935700 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data-custom\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.935722 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-ovndb-tls-certs\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.943692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-combined-ca-bundle\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.945146 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7729e2d8-6c8c-4759-9e5d-535ad1586f47-logs\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.945682 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2e60-915e-4447-9465-aa64f7f5c7bb-logs\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.951123 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-combined-ca-bundle\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.952720 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-ovndb-tls-certs\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.953212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data-custom\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.953282 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-combined-ca-bundle\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.953869 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-httpd-config\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.954341 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data-custom\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.955624 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-config\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.959495 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.971832 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwc6\" (UniqueName: \"kubernetes.io/projected/84203bc9-afb4-42cb-843d-c211490ce275-kube-api-access-6jwc6\") pod \"neutron-687c697484-j2btt\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.991083 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpv7q\" (UniqueName: \"kubernetes.io/projected/7729e2d8-6c8c-4759-9e5d-535ad1586f47-kube-api-access-vpv7q\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:19 crc kubenswrapper[4931]: I0130 05:26:19.991567 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data\") pod \"barbican-worker-7c996f77-c9rqm\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.014922 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r29mt\" (UniqueName: \"kubernetes.io/projected/728a2e60-915e-4447-9465-aa64f7f5c7bb-kube-api-access-r29mt\") pod \"barbican-keystone-listener-5f5d456c6b-66jxb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037513 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-scripts\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037565 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l264t\" (UniqueName: \"kubernetes.io/projected/c3dfec36-0758-42c6-8c28-997044eb59a3-kube-api-access-l264t\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037583 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-combined-ca-bundle\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037613 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037649 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3dfec36-0758-42c6-8c28-997044eb59a3-logs\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037667 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebe4f743-9a60-428f-8b58-14ba160d9fd7-logs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037685 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68vcv\" (UniqueName: \"kubernetes.io/projected/ebe4f743-9a60-428f-8b58-14ba160d9fd7-kube-api-access-68vcv\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037783 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-config-data\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037800 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-internal-tls-certs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037819 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data-custom\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037876 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-public-tls-certs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.037891 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-combined-ca-bundle\") pod 
\"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.044756 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebe4f743-9a60-428f-8b58-14ba160d9fd7-logs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.045211 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3dfec36-0758-42c6-8c28-997044eb59a3-logs\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.048694 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-scripts\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.049359 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-combined-ca-bundle\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.064003 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data-custom\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.066049 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-public-tls-certs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.073782 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-internal-tls-certs\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.073873 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-config-data\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.076829 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-combined-ca-bundle\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 
05:26:20.077941 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68vcv\" (UniqueName: \"kubernetes.io/projected/ebe4f743-9a60-428f-8b58-14ba160d9fd7-kube-api-access-68vcv\") pod \"placement-798b7dc5fb-xl2zq\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.079025 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l264t\" (UniqueName: \"kubernetes.io/projected/c3dfec36-0758-42c6-8c28-997044eb59a3-kube-api-access-l264t\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.079953 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-c7z8q"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.083469 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data\") pod \"barbican-api-794bfbdd44-9msr6\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.216947 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.227146 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.240965 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.252371 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.278342 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9d68b44b-5gp25"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.301856 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.400872 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f7d589966-mkfs5"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.439938 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-67465d5765-cp74w"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.554813 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6665f9d796-74mbd"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.582868 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-97bdbd495-2prdt"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.583126 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67465d5765-cp74w" event={"ID":"a83e0ea3-83ba-4e7c-803c-4fd9811318a2","Type":"ContainerStarted","Data":"9f1458d6f86849c7d56580c53cae53507cdf0fec4d72928952c134f8ba2a7ca8"} Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.593274 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9d68b44b-5gp25" event={"ID":"b92991ff-5b79-452a-b5ac-9dc90ab42f68","Type":"ContainerStarted","Data":"040b81795acd0bef7c76b7a99d650deaac66b5fa82f97baf669121be56928797"} Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.596601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" event={"ID":"807d8709-a403-4186-83f5-ec76aee793fe","Type":"ContainerStarted","Data":"fc0c653d3e574db62881709b302919c961837f9a8fc28421f26c150c1cbda477"} Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.596765 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-zmls6"] Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.601994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q" event={"ID":"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224","Type":"ContainerStarted","Data":"af773d7e6d6c3024589870daad5e39942c3e37e8d1998e13765a6119ce565675"} Jan 30 05:26:20 crc kubenswrapper[4931]: I0130 05:26:20.680737 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc49c789d-5gcj4"] Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.128874 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"] Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.254548 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-687c697484-j2btt"] Jan 30 05:26:21 crc kubenswrapper[4931]: W0130 05:26:21.278268 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84203bc9_afb4_42cb_843d_c211490ce275.slice/crio-ae04b2bb5e94306ea64985be9c8f6deb0c75c6536be8008e597d586d1aee985e WatchSource:0}: Error finding container ae04b2bb5e94306ea64985be9c8f6deb0c75c6536be8008e597d586d1aee985e: Status 404 returned error can't find the container with id ae04b2bb5e94306ea64985be9c8f6deb0c75c6536be8008e597d586d1aee985e Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.312261 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-798b7dc5fb-xl2zq"] Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.495647 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7c996f77-c9rqm"] Jan 30 
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.495972 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-794bfbdd44-9msr6"]
Jan 30 05:26:21 crc kubenswrapper[4931]: W0130 05:26:21.521879 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7729e2d8_6c8c_4759_9e5d_535ad1586f47.slice/crio-ac16bb78f3ca5ff67b0b11f3773806ecd75b6cd0b938e4013f99b8b4e7b2e044 WatchSource:0}: Error finding container ac16bb78f3ca5ff67b0b11f3773806ecd75b6cd0b938e4013f99b8b4e7b2e044: Status 404 returned error can't find the container with id ac16bb78f3ca5ff67b0b11f3773806ecd75b6cd0b938e4013f99b8b4e7b2e044
Jan 30 05:26:21 crc kubenswrapper[4931]: W0130 05:26:21.528556 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3dfec36_0758_42c6_8c28_997044eb59a3.slice/crio-d2345973c91074357738f420a2e030c74216ac2f1de71e5a31e295982e276e96 WatchSource:0}: Error finding container d2345973c91074357738f420a2e030c74216ac2f1de71e5a31e295982e276e96: Status 404 returned error can't find the container with id d2345973c91074357738f420a2e030c74216ac2f1de71e5a31e295982e276e96
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.671976 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bdbd495-2prdt" event={"ID":"2d6e5156-6e75-4dff-a322-b3d43e596c7e","Type":"ContainerStarted","Data":"2031f531f783ff9fda1aa19098c42b8b6619a54760d8a1056a1788a1c38b669b"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.672019 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bdbd495-2prdt" event={"ID":"2d6e5156-6e75-4dff-a322-b3d43e596c7e","Type":"ContainerStarted","Data":"c2c40320b6d71850a7db7d062b86807a450e4758cd147671abdfe8fd00c2df62"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.674622 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-97bdbd495-2prdt"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.686854 4931 generic.go:334] "Generic (PLEG): container finished" podID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerID="42238edb208312fe89370f3d6e71cdbdaaf5a688762779ee0068776a658a91e9" exitCode=0
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.686979 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" event={"ID":"4aa89fd3-2a8a-424c-b3a7-cf743d90a249","Type":"ContainerDied","Data":"42238edb208312fe89370f3d6e71cdbdaaf5a688762779ee0068776a658a91e9"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.687016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" event={"ID":"4aa89fd3-2a8a-424c-b3a7-cf743d90a249","Type":"ContainerStarted","Data":"b7351472d16e045cf1d352d57e3502d62cfe0a1c627e0387d4154e9570e9d7c6"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.688380 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.688482 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.693535 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798b7dc5fb-xl2zq" event={"ID":"ebe4f743-9a60-428f-8b58-14ba160d9fd7","Type":"ContainerStarted","Data":"7f97af972b1577947f7f7edff42a3df45ac3d6eddfca2ad04dcbcbf60edeb902"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.703623 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c996f77-c9rqm" event={"ID":"7729e2d8-6c8c-4759-9e5d-535ad1586f47","Type":"ContainerStarted","Data":"ac16bb78f3ca5ff67b0b11f3773806ecd75b6cd0b938e4013f99b8b4e7b2e044"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.711396 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-97bdbd495-2prdt" podStartSLOduration=3.7113781169999998 podStartE2EDuration="3.711378117s" podCreationTimestamp="2026-01-30 05:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:21.699162797 +0000 UTC m=+1117.069073054" watchObservedRunningTime="2026-01-30 05:26:21.711378117 +0000 UTC m=+1117.081288374"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.725813 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6665f9d796-74mbd" event={"ID":"28e3fd91-5906-4368-b156-e0d60f3c268e","Type":"ContainerStarted","Data":"e29747a73e8f9c7e78f65f5f5c5542788fec252768ae4f62a8f1f67f3d4ca4ed"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.725880 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6665f9d796-74mbd" event={"ID":"28e3fd91-5906-4368-b156-e0d60f3c268e","Type":"ContainerStarted","Data":"fc46b2c7ab19d04106c22b57ea741652e3c63fe169a85fea4405836701cbe7c6"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.725894 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6665f9d796-74mbd" event={"ID":"28e3fd91-5906-4368-b156-e0d60f3c268e","Type":"ContainerStarted","Data":"b479366de18a258ddf192480628e9708d18845ae58ecac6569bfbb633a96f682"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.727983 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.728017 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6665f9d796-74mbd"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.742694 4931 generic.go:334] "Generic (PLEG): container finished" podID="6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" containerID="9c3c4b6151a0c51d59294c812422f37d6d21632c4b81b84ecc9451a3cae1e0d5" exitCode=0
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.742751 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q" event={"ID":"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224","Type":"ContainerDied","Data":"9c3c4b6151a0c51d59294c812422f37d6d21632c4b81b84ecc9451a3cae1e0d5"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.754240 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" event={"ID":"728a2e60-915e-4447-9465-aa64f7f5c7bb","Type":"ContainerStarted","Data":"cd9a53b66398f13fcc5edf6801d39072217390bf6fb5b5264a9e5d24f429383b"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.762857 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9d68b44b-5gp25" event={"ID":"b92991ff-5b79-452a-b5ac-9dc90ab42f68","Type":"ContainerStarted","Data":"133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.768663 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794bfbdd44-9msr6" event={"ID":"c3dfec36-0758-42c6-8c28-997044eb59a3","Type":"ContainerStarted","Data":"d2345973c91074357738f420a2e030c74216ac2f1de71e5a31e295982e276e96"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.801002 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6665f9d796-74mbd" podStartSLOduration=2.800987688 podStartE2EDuration="2.800987688s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:21.770907151 +0000 UTC m=+1117.140817438" watchObservedRunningTime="2026-01-30 05:26:21.800987688 +0000 UTC m=+1117.170897945"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.802500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c697484-j2btt" event={"ID":"84203bc9-afb4-42cb-843d-c211490ce275","Type":"ContainerStarted","Data":"ae04b2bb5e94306ea64985be9c8f6deb0c75c6536be8008e597d586d1aee985e"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.835685 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc49c789d-5gcj4" event={"ID":"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec","Type":"ContainerStarted","Data":"e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.835728 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc49c789d-5gcj4" event={"ID":"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec","Type":"ContainerStarted","Data":"f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.835736 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc49c789d-5gcj4" event={"ID":"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec","Type":"ContainerStarted","Data":"49b94c209fcd846b366cb60120c52ee63d74a76288f62e76634d76df2ff577f1"}
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.836845 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dc49c789d-5gcj4"
Jan 30 05:26:21 crc kubenswrapper[4931]: I0130 05:26:21.863488 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dc49c789d-5gcj4" podStartSLOduration=2.863461923 podStartE2EDuration="2.863461923s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:21.853478332 +0000 UTC m=+1117.223388589" watchObservedRunningTime="2026-01-30 05:26:21.863461923 +0000 UTC m=+1117.233372180"
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.106790 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.215194 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q"
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.295914 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-swift-storage-0\") pod \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") "
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.296061 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-svc\") pod \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") "
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.296104 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-nb\") pod \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") "
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.296136 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q72vp\" (UniqueName: \"kubernetes.io/projected/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-kube-api-access-q72vp\") pod \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") "
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.296204 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-sb\") pod \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") "
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.296253 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-config\") pod \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\" (UID: \"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224\") "
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.335429 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-kube-api-access-q72vp" (OuterVolumeSpecName: "kube-api-access-q72vp") pod "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" (UID: "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224"). InnerVolumeSpecName "kube-api-access-q72vp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.403343 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q72vp\" (UniqueName: \"kubernetes.io/projected/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-kube-api-access-q72vp\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.528360 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" (UID: "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.536602 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-config" (OuterVolumeSpecName: "config") pod "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" (UID: "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.556944 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" (UID: "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.557078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" (UID: "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.573554 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" (UID: "6d840d6e-c5f2-4b2e-9dc1-1b6df0950224"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.610708 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.610749 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.610762 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.610775 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.610787 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.864600 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c697484-j2btt" event={"ID":"84203bc9-afb4-42cb-843d-c211490ce275","Type":"ContainerStarted","Data":"14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625"}
pod="openstack/neutron-687c697484-j2btt" event={"ID":"84203bc9-afb4-42cb-843d-c211490ce275","Type":"ContainerStarted","Data":"56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.865736 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.887727 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q" event={"ID":"6d840d6e-c5f2-4b2e-9dc1-1b6df0950224","Type":"ContainerDied","Data":"af773d7e6d6c3024589870daad5e39942c3e37e8d1998e13765a6119ce565675"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.887780 4931 scope.go:117] "RemoveContainer" containerID="9c3c4b6151a0c51d59294c812422f37d6d21632c4b81b84ecc9451a3cae1e0d5" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.887901 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-c7z8q" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.895720 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-687c697484-j2btt" podStartSLOduration=3.895704927 podStartE2EDuration="3.895704927s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:22.880737621 +0000 UTC m=+1118.250647878" watchObservedRunningTime="2026-01-30 05:26:22.895704927 +0000 UTC m=+1118.265615184" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.904749 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9d68b44b-5gp25" event={"ID":"b92991ff-5b79-452a-b5ac-9dc90ab42f68","Type":"ContainerStarted","Data":"fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.906337 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d9d68b44b-5gp25" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.906363 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d9d68b44b-5gp25" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.923261 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794bfbdd44-9msr6" event={"ID":"c3dfec36-0758-42c6-8c28-997044eb59a3","Type":"ContainerStarted","Data":"7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.923313 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794bfbdd44-9msr6" event={"ID":"c3dfec36-0758-42c6-8c28-997044eb59a3","Type":"ContainerStarted","Data":"845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.923359 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.923372 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.941032 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d9d68b44b-5gp25" podStartSLOduration=3.941008478 podStartE2EDuration="3.941008478s" podCreationTimestamp="2026-01-30 05:26:19 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:22.931177862 +0000 UTC m=+1118.301088119" watchObservedRunningTime="2026-01-30 05:26:22.941008478 +0000 UTC m=+1118.310918735" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.951343 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" event={"ID":"4aa89fd3-2a8a-424c-b3a7-cf743d90a249","Type":"ContainerStarted","Data":"d07be3298c7001c20f2b88e58ea237b62170a8ccd4a24bfb286b3d0a2bff7807"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.952149 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.955943 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798b7dc5fb-xl2zq" event={"ID":"ebe4f743-9a60-428f-8b58-14ba160d9fd7","Type":"ContainerStarted","Data":"e8519c60ec437acc9c9b5934ab3951ad5ad349186eda26fc85c2bae9b3010203"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.955983 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798b7dc5fb-xl2zq" event={"ID":"ebe4f743-9a60-428f-8b58-14ba160d9fd7","Type":"ContainerStarted","Data":"1d733edb3ceaca43f34355e23bbaaced9e55a731057ead7b89c96398337d6e11"} Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.956587 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.956615 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:22 crc kubenswrapper[4931]: I0130 05:26:22.981302 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-794bfbdd44-9msr6" podStartSLOduration=3.981283362 podStartE2EDuration="3.981283362s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:22.964599912 +0000 UTC m=+1118.334510169" watchObservedRunningTime="2026-01-30 05:26:22.981283362 +0000 UTC m=+1118.351193619" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.092983 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-c7z8q"] Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.102272 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-c7z8q"] Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.119760 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-798b7dc5fb-xl2zq" podStartSLOduration=4.119717143 podStartE2EDuration="4.119717143s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:23.066164045 +0000 UTC m=+1118.436074302" watchObservedRunningTime="2026-01-30 05:26:23.119717143 +0000 UTC m=+1118.489627400" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.136640 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" podStartSLOduration=4.136626949 podStartE2EDuration="4.136626949s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:23.09295982 +0000 UTC m=+1118.462870077" watchObservedRunningTime="2026-01-30 05:26:23.136626949 +0000 UTC m=+1118.506537206" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.212876 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-687c697484-j2btt"] Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.239271 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75d9f6f6ff-kmswn"] Jan 30 05:26:23 crc kubenswrapper[4931]: E0130 05:26:23.239714 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" containerName="init" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.239731 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" containerName="init" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.239937 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" containerName="init" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.240888 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.244457 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.246187 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.249878 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75d9f6f6ff-kmswn"] Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.328357 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-public-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.328414 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-internal-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.328467 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-httpd-config\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.328823 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-ovndb-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.328889 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c78ch\" (UniqueName: \"kubernetes.io/projected/e1f9790c-c395-4c72-b569-3140f703b56f-kube-api-access-c78ch\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.328917 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-config\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.329074 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-combined-ca-bundle\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431030 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-combined-ca-bundle\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431137 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-public-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431171 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-internal-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431199 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-httpd-config\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431242 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-ovndb-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431266 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c78ch\" (UniqueName: \"kubernetes.io/projected/e1f9790c-c395-4c72-b569-3140f703b56f-kube-api-access-c78ch\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.431280 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-config\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.442448 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d840d6e-c5f2-4b2e-9dc1-1b6df0950224" path="/var/lib/kubelet/pods/6d840d6e-c5f2-4b2e-9dc1-1b6df0950224/volumes" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.458225 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-ovndb-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.459547 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-config\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.460146 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-internal-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.476383 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-combined-ca-bundle\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.476888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-public-tls-certs\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.477343 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-httpd-config\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.479539 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c78ch\" (UniqueName: \"kubernetes.io/projected/e1f9790c-c395-4c72-b569-3140f703b56f-kube-api-access-c78ch\") pod \"neutron-75d9f6f6ff-kmswn\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") " pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.563019 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 05:26:23 crc kubenswrapper[4931]: I0130 05:26:23.563019 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d9f6f6ff-kmswn"
Jan 30 05:26:24 crc kubenswrapper[4931]: I0130 05:26:24.703037 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 05:26:24 crc kubenswrapper[4931]: I0130 05:26:24.934949 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 05:26:25 crc kubenswrapper[4931]: I0130 05:26:25.007354 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-687c697484-j2btt" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-api" containerID="cri-o://56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e" gracePeriod=30
Jan 30 05:26:25 crc kubenswrapper[4931]: I0130 05:26:25.007783 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-687c697484-j2btt" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-httpd" containerID="cri-o://14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625" gracePeriod=30
Jan 30 05:26:25 crc kubenswrapper[4931]: W0130 05:26:25.459336 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1f9790c_c395_4c72_b569_3140f703b56f.slice/crio-7ca85a404546ccf8741ecd606d0270826591549f7f7979a41ae78e99d2986a63 WatchSource:0}: Error finding container 7ca85a404546ccf8741ecd606d0270826591549f7f7979a41ae78e99d2986a63: Status 404 returned error can't find the container with id 7ca85a404546ccf8741ecd606d0270826591549f7f7979a41ae78e99d2986a63
Jan 30 05:26:25 crc kubenswrapper[4931]: I0130 05:26:25.464569 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75d9f6f6ff-kmswn"]
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.064474 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" event={"ID":"728a2e60-915e-4447-9465-aa64f7f5c7bb","Type":"ContainerStarted","Data":"1bd0c14353cbfd196f658cae7f7167624a1cc818a0ca23ec5151f1c871a22e65"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.064735 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" event={"ID":"728a2e60-915e-4447-9465-aa64f7f5c7bb","Type":"ContainerStarted","Data":"f0f483180c30bc672edc2e00c840d52567eb9b0c61f8c285d3a7c2a185f38020"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.098357 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" podStartSLOduration=3.397344378 podStartE2EDuration="7.098342226s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="2026-01-30 05:26:21.170584727 +0000 UTC m=+1116.540494984" lastFinishedPulling="2026-01-30 05:26:24.871582585 +0000 UTC m=+1120.241492832" observedRunningTime="2026-01-30 05:26:26.095858349 +0000 UTC m=+1121.465768626" watchObservedRunningTime="2026-01-30 05:26:26.098342226 +0000 UTC m=+1121.468252483"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.119659 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67465d5765-cp74w" event={"ID":"a83e0ea3-83ba-4e7c-803c-4fd9811318a2","Type":"ContainerStarted","Data":"e33db123fcf3634be4f056a5ccfd14e3aaf930151b4fce1b9c71c79348a5ff96"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.119703 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67465d5765-cp74w" event={"ID":"a83e0ea3-83ba-4e7c-803c-4fd9811318a2","Type":"ContainerStarted","Data":"fcd84ab06f79a15cca51ab919a2d8f4365b9ca38a548c71882f816752c64d1a8"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.147932 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-f7d589966-mkfs5"]
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.184053 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6665f9d796-74mbd"]
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.184290 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6665f9d796-74mbd" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api-log" containerID="cri-o://fc46b2c7ab19d04106c22b57ea741652e3c63fe169a85fea4405836701cbe7c6" gracePeriod=30
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.184584 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-67465d5765-cp74w" podStartSLOduration=3.866018903 podStartE2EDuration="8.184562741s" podCreationTimestamp="2026-01-30 05:26:18 +0000 UTC" firstStartedPulling="2026-01-30 05:26:20.554946866 +0000 UTC m=+1115.924857123" lastFinishedPulling="2026-01-30 05:26:24.873490704 +0000 UTC m=+1120.243400961" observedRunningTime="2026-01-30 05:26:26.166116947 +0000 UTC m=+1121.536027194" watchObservedRunningTime="2026-01-30 05:26:26.184562741 +0000 UTC m=+1121.554472998"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.186720 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6665f9d796-74mbd" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api" containerID="cri-o://e29747a73e8f9c7e78f65f5f5c5542788fec252768ae4f62a8f1f67f3d4ca4ed" gracePeriod=30
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.189751 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d9f6f6ff-kmswn" event={"ID":"e1f9790c-c395-4c72-b569-3140f703b56f","Type":"ContainerStarted","Data":"59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.189783 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d9f6f6ff-kmswn" event={"ID":"e1f9790c-c395-4c72-b569-3140f703b56f","Type":"ContainerStarted","Data":"7ca85a404546ccf8741ecd606d0270826591549f7f7979a41ae78e99d2986a63"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.198660 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6665f9d796-74mbd" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": EOF"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.219037 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" event={"ID":"807d8709-a403-4186-83f5-ec76aee793fe","Type":"ContainerStarted","Data":"4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.219077 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" event={"ID":"807d8709-a403-4186-83f5-ec76aee793fe","Type":"ContainerStarted","Data":"74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.220075 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d69b6c966-npv8t"]
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.227813 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.233152 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.233732 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.237296 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c996f77-c9rqm" event={"ID":"7729e2d8-6c8c-4759-9e5d-535ad1586f47","Type":"ContainerStarted","Data":"2c58ff417f0ff91cddebc47f633febd5ac50ac3ddd97dcc2fce28574c94ac8a6"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.237398 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c996f77-c9rqm" event={"ID":"7729e2d8-6c8c-4759-9e5d-535ad1586f47","Type":"ContainerStarted","Data":"4ec5d987a513f3f04cf30f8d242bd5ee734a2387c5a279b070c18b72f4a56519"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.247344 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d69b6c966-npv8t"]
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.255122 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" podStartSLOduration=3.938538331 podStartE2EDuration="8.255107998s" podCreationTimestamp="2026-01-30 05:26:18 +0000 UTC" firstStartedPulling="2026-01-30 05:26:20.554694698 +0000 UTC m=+1115.924604955" lastFinishedPulling="2026-01-30 05:26:24.871264365 +0000 UTC m=+1120.241174622" observedRunningTime="2026-01-30 05:26:26.254471748 +0000 UTC m=+1121.624382015" watchObservedRunningTime="2026-01-30 05:26:26.255107998 +0000 UTC m=+1121.625018255"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.261274 4931 generic.go:334] "Generic (PLEG): container finished" podID="84203bc9-afb4-42cb-843d-c211490ce275" containerID="14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625" exitCode=0
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.261315 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c697484-j2btt" event={"ID":"84203bc9-afb4-42cb-843d-c211490ce275","Type":"ContainerDied","Data":"14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625"}
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314668 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-combined-ca-bundle\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314730 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-public-tls-certs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314752 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-internal-tls-certs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314860 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99zt\" (UniqueName: \"kubernetes.io/projected/58928fea-709c-44d8-bd12-23937da8e2c4-kube-api-access-x99zt\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314900 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data-custom\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314935 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58928fea-709c-44d8-bd12-23937da8e2c4-logs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.314966 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.317088 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7c996f77-c9rqm" podStartSLOduration=3.973442108 podStartE2EDuration="7.317069787s" podCreationTimestamp="2026-01-30 05:26:19 +0000 UTC" firstStartedPulling="2026-01-30 05:26:21.528614896 +0000 UTC m=+1116.898525153" lastFinishedPulling="2026-01-30 05:26:24.872242575 +0000 UTC m=+1120.242152832" observedRunningTime="2026-01-30 05:26:26.296395973 +0000 UTC m=+1121.666306230" watchObservedRunningTime="2026-01-30 05:26:26.317069787 +0000 UTC m=+1121.686980044"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.353857 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67465d5765-cp74w"]
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417552 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x99zt\" (UniqueName: \"kubernetes.io/projected/58928fea-709c-44d8-bd12-23937da8e2c4-kube-api-access-x99zt\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417607 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data-custom\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417642 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58928fea-709c-44d8-bd12-23937da8e2c4-logs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417674 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417715 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-combined-ca-bundle\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417742 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-public-tls-certs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.417760 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-internal-tls-certs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.418279 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58928fea-709c-44d8-bd12-23937da8e2c4-logs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.424160 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-internal-tls-certs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.425156 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-combined-ca-bundle\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.437284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.439945 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-public-tls-certs\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.440195 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data-custom\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.449456 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x99zt\" (UniqueName: \"kubernetes.io/projected/58928fea-709c-44d8-bd12-23937da8e2c4-kube-api-access-x99zt\") pod \"barbican-api-7d69b6c966-npv8t\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") " pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:26 crc kubenswrapper[4931]: I0130 05:26:26.557215 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.055710 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d69b6c966-npv8t"]
Jan 30 05:26:27 crc kubenswrapper[4931]: W0130 05:26:27.067700 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58928fea_709c_44d8_bd12_23937da8e2c4.slice/crio-80b7562d2e28920f91efc6005a3dada9547915b81544850dc4480d2be479f27a WatchSource:0}: Error finding container 80b7562d2e28920f91efc6005a3dada9547915b81544850dc4480d2be479f27a: Status 404 returned error can't find the container with id 80b7562d2e28920f91efc6005a3dada9547915b81544850dc4480d2be479f27a
Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.273559 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d69b6c966-npv8t" event={"ID":"58928fea-709c-44d8-bd12-23937da8e2c4","Type":"ContainerStarted","Data":"80b7562d2e28920f91efc6005a3dada9547915b81544850dc4480d2be479f27a"}
Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.276976 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d9f6f6ff-kmswn" event={"ID":"e1f9790c-c395-4c72-b569-3140f703b56f","Type":"ContainerStarted","Data":"e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e"}
Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.277865 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75d9f6f6ff-kmswn"
Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.280143 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rpr97" event={"ID":"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6","Type":"ContainerStarted","Data":"1e41b0f0000283bd1a29c28f4d8fdb74fdd5389e3ebd8804eac1db1375b10248"}
Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.282386 4931 generic.go:334] "Generic (PLEG): container finished" podID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerID="fc46b2c7ab19d04106c22b57ea741652e3c63fe169a85fea4405836701cbe7c6" exitCode=143
event={"ID":"28e3fd91-5906-4368-b156-e0d60f3c268e","Type":"ContainerDied","Data":"fc46b2c7ab19d04106c22b57ea741652e3c63fe169a85fea4405836701cbe7c6"} Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.284031 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener-log" containerID="cri-o://74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6" gracePeriod=30 Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.284263 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener" containerID="cri-o://4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0" gracePeriod=30 Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.304789 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75d9f6f6ff-kmswn" podStartSLOduration=4.304774473 podStartE2EDuration="4.304774473s" podCreationTimestamp="2026-01-30 05:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:27.300618984 +0000 UTC m=+1122.670529241" watchObservedRunningTime="2026-01-30 05:26:27.304774473 +0000 UTC m=+1122.674684730" Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.334870 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rpr97" podStartSLOduration=3.844829641 podStartE2EDuration="40.33485332s" podCreationTimestamp="2026-01-30 05:25:47 +0000 UTC" firstStartedPulling="2026-01-30 05:25:48.742369501 +0000 UTC m=+1084.112279758" lastFinishedPulling="2026-01-30 05:26:25.23239318 +0000 UTC m=+1120.602303437" observedRunningTime="2026-01-30 05:26:27.334384805 +0000 UTC m=+1122.704295072" watchObservedRunningTime="2026-01-30 05:26:27.33485332 +0000 UTC m=+1122.704763577" Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.366870 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:26:27 crc kubenswrapper[4931]: I0130 05:26:27.366920 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.097402 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6665f9d796-74mbd" Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.295322 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d69b6c966-npv8t" event={"ID":"58928fea-709c-44d8-bd12-23937da8e2c4","Type":"ContainerStarted","Data":"44392d9ac535d9a3ce2ca47aa88e680823c3197a2e50d537aa67df4b03e52fd1"} Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.295373 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d69b6c966-npv8t" 
event={"ID":"58928fea-709c-44d8-bd12-23937da8e2c4","Type":"ContainerStarted","Data":"0e0a199cc977b5213010336cc2b6c461a3916b61b6c3d9f6dc8eecc7d5c8d17e"} Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.295412 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d69b6c966-npv8t" Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.295593 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d69b6c966-npv8t" Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.306352 4931 generic.go:334] "Generic (PLEG): container finished" podID="807d8709-a403-4186-83f5-ec76aee793fe" containerID="74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6" exitCode=143 Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.306686 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" event={"ID":"807d8709-a403-4186-83f5-ec76aee793fe","Type":"ContainerDied","Data":"74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6"} Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.306923 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67465d5765-cp74w" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker-log" containerID="cri-o://fcd84ab06f79a15cca51ab919a2d8f4365b9ca38a548c71882f816752c64d1a8" gracePeriod=30 Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.307028 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-67465d5765-cp74w" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker" containerID="cri-o://e33db123fcf3634be4f056a5ccfd14e3aaf930151b4fce1b9c71c79348a5ff96" gracePeriod=30 Jan 30 05:26:28 crc kubenswrapper[4931]: I0130 05:26:28.331030 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d69b6c966-npv8t" podStartSLOduration=2.3310119 podStartE2EDuration="2.3310119s" podCreationTimestamp="2026-01-30 05:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:28.320303457 +0000 UTC m=+1123.690213714" watchObservedRunningTime="2026-01-30 05:26:28.3310119 +0000 UTC m=+1123.700922157" Jan 30 05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.324611 4931 generic.go:334] "Generic (PLEG): container finished" podID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerID="e33db123fcf3634be4f056a5ccfd14e3aaf930151b4fce1b9c71c79348a5ff96" exitCode=0 Jan 30 05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.325024 4931 generic.go:334] "Generic (PLEG): container finished" podID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerID="fcd84ab06f79a15cca51ab919a2d8f4365b9ca38a548c71882f816752c64d1a8" exitCode=143 Jan 30 05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.324682 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67465d5765-cp74w" event={"ID":"a83e0ea3-83ba-4e7c-803c-4fd9811318a2","Type":"ContainerDied","Data":"e33db123fcf3634be4f056a5ccfd14e3aaf930151b4fce1b9c71c79348a5ff96"} Jan 30 05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.325328 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67465d5765-cp74w" event={"ID":"a83e0ea3-83ba-4e7c-803c-4fd9811318a2","Type":"ContainerDied","Data":"fcd84ab06f79a15cca51ab919a2d8f4365b9ca38a548c71882f816752c64d1a8"} Jan 30 
05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.907292 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.972889 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-6rw7f"] Jan 30 05:26:29 crc kubenswrapper[4931]: I0130 05:26:29.973116 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerName="dnsmasq-dns" containerID="cri-o://00f6a2dd44878296bf4733164be83ce28b802aa8c1f8943860365c668511c527" gracePeriod=10 Jan 30 05:26:30 crc kubenswrapper[4931]: I0130 05:26:30.340692 4931 generic.go:334] "Generic (PLEG): container finished" podID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerID="00f6a2dd44878296bf4733164be83ce28b802aa8c1f8943860365c668511c527" exitCode=0 Jan 30 05:26:30 crc kubenswrapper[4931]: I0130 05:26:30.341239 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" event={"ID":"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089","Type":"ContainerDied","Data":"00f6a2dd44878296bf4733164be83ce28b802aa8c1f8943860365c668511c527"} Jan 30 05:26:31 crc kubenswrapper[4931]: I0130 05:26:31.588774 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6665f9d796-74mbd" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:41292->10.217.0.157:9311: read: connection reset by peer" Jan 30 05:26:31 crc kubenswrapper[4931]: I0130 05:26:31.588811 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6665f9d796-74mbd" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:41288->10.217.0.157:9311: read: connection reset by peer" Jan 30 05:26:31 crc kubenswrapper[4931]: I0130 05:26:31.779120 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:31 crc kubenswrapper[4931]: I0130 05:26:31.846206 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.387774 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" event={"ID":"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089","Type":"ContainerDied","Data":"ae9e2c438382358c90343f1970628f2e8ea67a2dacc48b9e3c93a331cd67467e"} Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.387822 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae9e2c438382358c90343f1970628f2e8ea67a2dacc48b9e3c93a331cd67467e" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.395711 4931 generic.go:334] "Generic (PLEG): container finished" podID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerID="e29747a73e8f9c7e78f65f5f5c5542788fec252768ae4f62a8f1f67f3d4ca4ed" exitCode=0 Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.395792 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6665f9d796-74mbd" event={"ID":"28e3fd91-5906-4368-b156-e0d60f3c268e","Type":"ContainerDied","Data":"e29747a73e8f9c7e78f65f5f5c5542788fec252768ae4f62a8f1f67f3d4ca4ed"} Jan 30 05:26:32 crc 
Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.447487 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.569864 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5mcg\" (UniqueName: \"kubernetes.io/projected/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-kube-api-access-s5mcg\") pod \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.569934 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-config\") pod \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.569954 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-sb\") pod \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.569994 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-swift-storage-0\") pod \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.570017 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-nb\") pod \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.570035 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-svc\") pod \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\" (UID: \"10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089\") " Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.593411 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-kube-api-access-s5mcg" (OuterVolumeSpecName: "kube-api-access-s5mcg") pod "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" (UID: "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"). InnerVolumeSpecName "kube-api-access-s5mcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.641568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" (UID: "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.663675 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-config" (OuterVolumeSpecName: "config") pod "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" (UID: "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.671949 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.671972 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.671983 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5mcg\" (UniqueName: \"kubernetes.io/projected/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-kube-api-access-s5mcg\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.689002 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" (UID: "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.751082 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" (UID: "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.758126 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" (UID: "10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.774004 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.774032 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:32 crc kubenswrapper[4931]: I0130 05:26:32.774050 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.058231 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67465d5765-cp74w" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.082179 4931 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.082179 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6665f9d796-74mbd" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.184592 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data\") pod \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.184636 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-combined-ca-bundle\") pod \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.184678 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vl6j\" (UniqueName: \"kubernetes.io/projected/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-kube-api-access-8vl6j\") pod \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.184718 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data\") pod \"28e3fd91-5906-4368-b156-e0d60f3c268e\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.184740 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-logs\") pod \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.185216 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e3fd91-5906-4368-b156-e0d60f3c268e-logs\") pod \"28e3fd91-5906-4368-b156-e0d60f3c268e\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.185325 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9lwj\" (UniqueName: \"kubernetes.io/projected/28e3fd91-5906-4368-b156-e0d60f3c268e-kube-api-access-f9lwj\") pod \"28e3fd91-5906-4368-b156-e0d60f3c268e\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.185349 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data-custom\") pod \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\" (UID: \"a83e0ea3-83ba-4e7c-803c-4fd9811318a2\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.185373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data-custom\") pod \"28e3fd91-5906-4368-b156-e0d60f3c268e\" (UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.185397 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-combined-ca-bundle\") pod \"28e3fd91-5906-4368-b156-e0d60f3c268e\"
(UID: \"28e3fd91-5906-4368-b156-e0d60f3c268e\") " Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.185899 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-logs" (OuterVolumeSpecName: "logs") pod "a83e0ea3-83ba-4e7c-803c-4fd9811318a2" (UID: "a83e0ea3-83ba-4e7c-803c-4fd9811318a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.186039 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28e3fd91-5906-4368-b156-e0d60f3c268e-logs" (OuterVolumeSpecName: "logs") pod "28e3fd91-5906-4368-b156-e0d60f3c268e" (UID: "28e3fd91-5906-4368-b156-e0d60f3c268e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.189143 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28e3fd91-5906-4368-b156-e0d60f3c268e" (UID: "28e3fd91-5906-4368-b156-e0d60f3c268e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.196843 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a83e0ea3-83ba-4e7c-803c-4fd9811318a2" (UID: "a83e0ea3-83ba-4e7c-803c-4fd9811318a2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.196864 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-kube-api-access-8vl6j" (OuterVolumeSpecName: "kube-api-access-8vl6j") pod "a83e0ea3-83ba-4e7c-803c-4fd9811318a2" (UID: "a83e0ea3-83ba-4e7c-803c-4fd9811318a2"). InnerVolumeSpecName "kube-api-access-8vl6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.200711 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e3fd91-5906-4368-b156-e0d60f3c268e-kube-api-access-f9lwj" (OuterVolumeSpecName: "kube-api-access-f9lwj") pod "28e3fd91-5906-4368-b156-e0d60f3c268e" (UID: "28e3fd91-5906-4368-b156-e0d60f3c268e"). InnerVolumeSpecName "kube-api-access-f9lwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.210641 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a83e0ea3-83ba-4e7c-803c-4fd9811318a2" (UID: "a83e0ea3-83ba-4e7c-803c-4fd9811318a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.214124 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28e3fd91-5906-4368-b156-e0d60f3c268e" (UID: "28e3fd91-5906-4368-b156-e0d60f3c268e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.233909 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data" (OuterVolumeSpecName: "config-data") pod "a83e0ea3-83ba-4e7c-803c-4fd9811318a2" (UID: "a83e0ea3-83ba-4e7c-803c-4fd9811318a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.234160 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data" (OuterVolumeSpecName: "config-data") pod "28e3fd91-5906-4368-b156-e0d60f3c268e" (UID: "28e3fd91-5906-4368-b156-e0d60f3c268e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287674 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e3fd91-5906-4368-b156-e0d60f3c268e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287706 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9lwj\" (UniqueName: \"kubernetes.io/projected/28e3fd91-5906-4368-b156-e0d60f3c268e-kube-api-access-f9lwj\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287720 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287730 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287738 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287746 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287754 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287762 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vl6j\" (UniqueName: \"kubernetes.io/projected/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-kube-api-access-8vl6j\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287770 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e3fd91-5906-4368-b156-e0d60f3c268e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.287778 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e0ea3-83ba-4e7c-803c-4fd9811318a2-logs\") on node 
\"crc\" DevicePath \"\"" Jan 30 05:26:33 crc kubenswrapper[4931]: E0130 05:26:33.313671 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.406442 4931 generic.go:334] "Generic (PLEG): container finished" podID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" containerID="1e41b0f0000283bd1a29c28f4d8fdb74fdd5389e3ebd8804eac1db1375b10248" exitCode=0 Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.406510 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rpr97" event={"ID":"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6","Type":"ContainerDied","Data":"1e41b0f0000283bd1a29c28f4d8fdb74fdd5389e3ebd8804eac1db1375b10248"} Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.410219 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerStarted","Data":"21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031"} Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.410277 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="ceilometer-notification-agent" containerID="cri-o://ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418" gracePeriod=30 Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.410307 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.410347 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="proxy-httpd" containerID="cri-o://21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031" gracePeriod=30 Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.410447 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="sg-core" containerID="cri-o://c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547" gracePeriod=30 Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.415137 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6665f9d796-74mbd" event={"ID":"28e3fd91-5906-4368-b156-e0d60f3c268e","Type":"ContainerDied","Data":"b479366de18a258ddf192480628e9708d18845ae58ecac6569bfbb633a96f682"} Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.415165 4931 scope.go:117] "RemoveContainer" containerID="e29747a73e8f9c7e78f65f5f5c5542788fec252768ae4f62a8f1f67f3d4ca4ed" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.415242 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6665f9d796-74mbd" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.420631 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-6rw7f" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.421248 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-67465d5765-cp74w" event={"ID":"a83e0ea3-83ba-4e7c-803c-4fd9811318a2","Type":"ContainerDied","Data":"9f1458d6f86849c7d56580c53cae53507cdf0fec4d72928952c134f8ba2a7ca8"} Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.421326 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-67465d5765-cp74w" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.475446 4931 scope.go:117] "RemoveContainer" containerID="fc46b2c7ab19d04106c22b57ea741652e3c63fe169a85fea4405836701cbe7c6" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.505162 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6665f9d796-74mbd"] Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.505821 4931 scope.go:117] "RemoveContainer" containerID="e33db123fcf3634be4f056a5ccfd14e3aaf930151b4fce1b9c71c79348a5ff96" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.543254 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6665f9d796-74mbd"] Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.555965 4931 scope.go:117] "RemoveContainer" containerID="fcd84ab06f79a15cca51ab919a2d8f4365b9ca38a548c71882f816752c64d1a8" Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.559019 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-67465d5765-cp74w"] Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.570293 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-67465d5765-cp74w"] Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.604087 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-6rw7f"] Jan 30 05:26:33 crc kubenswrapper[4931]: I0130 05:26:33.621699 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-6rw7f"] Jan 30 05:26:33 crc kubenswrapper[4931]: E0130 05:26:33.671936 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbaca0_0d1f_47cd_bb8e_8e09e4a65307.slice/crio-c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda83e0ea3_83ba_4e7c_803c_4fd9811318a2.slice/crio-9f1458d6f86849c7d56580c53cae53507cdf0fec4d72928952c134f8ba2a7ca8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda83e0ea3_83ba_4e7c_803c_4fd9811318a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28e3fd91_5906_4368_b156_e0d60f3c268e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebbaca0_0d1f_47cd_bb8e_8e09e4a65307.slice/crio-conmon-c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28e3fd91_5906_4368_b156_e0d60f3c268e.slice/crio-b479366de18a258ddf192480628e9708d18845ae58ecac6569bfbb633a96f682\": RecentStats: unable to find data 
in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10aa7dbb_a9c9_4f2e_8ae5_ec39da4fb089.slice/crio-ae9e2c438382358c90343f1970628f2e8ea67a2dacc48b9e3c93a331cd67467e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10aa7dbb_a9c9_4f2e_8ae5_ec39da4fb089.slice\": RecentStats: unable to find data in memory cache]" Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.434655 4931 generic.go:334] "Generic (PLEG): container finished" podID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerID="21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031" exitCode=0 Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.435099 4931 generic.go:334] "Generic (PLEG): container finished" podID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerID="c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547" exitCode=2 Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.435192 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerDied","Data":"21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031"} Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.435243 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerDied","Data":"c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547"} Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.887059 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rpr97" Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.918802 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-combined-ca-bundle\") pod \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.918983 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-config-data\") pod \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.919027 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-db-sync-config-data\") pod \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.919071 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6jwz\" (UniqueName: \"kubernetes.io/projected/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-kube-api-access-x6jwz\") pod \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.919187 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-scripts\") pod \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.919318 4931 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-etc-machine-id\") pod \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\" (UID: \"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6\") " Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.920084 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" (UID: "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.927714 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-scripts" (OuterVolumeSpecName: "scripts") pod "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" (UID: "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.929276 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" (UID: "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.937659 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-kube-api-access-x6jwz" (OuterVolumeSpecName: "kube-api-access-x6jwz") pod "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" (UID: "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6"). InnerVolumeSpecName "kube-api-access-x6jwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:34 crc kubenswrapper[4931]: I0130 05:26:34.971229 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" (UID: "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.001046 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-config-data" (OuterVolumeSpecName: "config-data") pod "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" (UID: "6dd6723b-baf8-47eb-a774-68a5dfbcc4a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.021578 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.021605 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.021614 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.021622 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.021631 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.021639 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6jwz\" (UniqueName: \"kubernetes.io/projected/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6-kube-api-access-x6jwz\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.439558 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" path="/var/lib/kubelet/pods/10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089/volumes" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.440847 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" path="/var/lib/kubelet/pods/28e3fd91-5906-4368-b156-e0d60f3c268e/volumes" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.442359 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" path="/var/lib/kubelet/pods/a83e0ea3-83ba-4e7c-803c-4fd9811318a2/volumes" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.451521 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rpr97" event={"ID":"6dd6723b-baf8-47eb-a774-68a5dfbcc4a6","Type":"ContainerDied","Data":"f838ce1f11e506679d678bae95342cc3dcecec78b2114b17644603c407ad3619"} Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.451594 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f838ce1f11e506679d678bae95342cc3dcecec78b2114b17644603c407ad3619" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.451689 4931 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.451689 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rpr97" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832048 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-vxxmk"] Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832391 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832408 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api" Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832444 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerName="init" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832451 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerName="init" Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832464 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" containerName="cinder-db-sync" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832474 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" containerName="cinder-db-sync" Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832483 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker-log" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832490 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker-log" Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832504 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api-log" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832510 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api-log" Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832522 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832529 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker" Jan 30 05:26:35 crc kubenswrapper[4931]: E0130 05:26:35.832539 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerName="dnsmasq-dns" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832546 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089" containerName="dnsmasq-dns" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832707 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api-log" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832723 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832735 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="10aa7dbb-a9c9-4f2e-8ae5-ec39da4fb089"
containerName="dnsmasq-dns" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832749 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" containerName="cinder-db-sync" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832757 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e3fd91-5906-4368-b156-e0d60f3c268e" containerName="barbican-api" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.832764 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83e0ea3-83ba-4e7c-803c-4fd9811318a2" containerName="barbican-worker-log" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.833625 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.853963 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-vxxmk"] Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.867221 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.872647 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.876455 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kv6bp" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.876560 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.876671 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.876748 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.888035 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.943864 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-config\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.943912 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682nm\" (UniqueName: \"kubernetes.io/projected/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-kube-api-access-682nm\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.943945 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.943983 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944001 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944040 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944058 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944075 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944098 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944117 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkqp\" (UniqueName: \"kubernetes.io/projected/1f487872-4003-4559-8f72-1c6022321160-kube-api-access-2pkqp\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944145 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.944200 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f487872-4003-4559-8f72-1c6022321160-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:35 crc kubenswrapper[4931]: I0130 05:26:35.995597 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:26:35 crc 
kubenswrapper[4931]: I0130 05:26:35.997476 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.000016 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.010184 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.048851 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-config\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.048891 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-682nm\" (UniqueName: \"kubernetes.io/projected/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-kube-api-access-682nm\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.048928 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.048968 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.048989 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049048 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049066 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049084 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049102 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049124 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049146 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkqp\" (UniqueName: \"kubernetes.io/projected/1f487872-4003-4559-8f72-1c6022321160-kube-api-access-2pkqp\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049177 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049218 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9bv\" (UniqueName: \"kubernetes.io/projected/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-kube-api-access-5m9bv\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049238 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-scripts\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049264 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049283 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049600 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-logs\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.049950 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.050012 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f487872-4003-4559-8f72-1c6022321160-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.050089 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f487872-4003-4559-8f72-1c6022321160-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.050500 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.051019 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.051778 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-config\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.055946 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.056635 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.057553 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 
05:26:36.059946 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.061880 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-scripts\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.077735 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkqp\" (UniqueName: \"kubernetes.io/projected/1f487872-4003-4559-8f72-1c6022321160-kube-api-access-2pkqp\") pod \"cinder-scheduler-0\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.082864 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-682nm\" (UniqueName: \"kubernetes.io/projected/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-kube-api-access-682nm\") pod \"dnsmasq-dns-75bfc9b94f-vxxmk\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") " pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154652 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154739 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154793 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m9bv\" (UniqueName: \"kubernetes.io/projected/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-kube-api-access-5m9bv\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-scripts\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154837 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154856 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " 
pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.154886 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-logs\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.155482 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-logs\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.156869 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.165696 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.166308 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.166960 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.173006 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-scripts\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.182927 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m9bv\" (UniqueName: \"kubernetes.io/projected/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-kube-api-access-5m9bv\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.200115 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.207201 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.344443 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.608703 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.731173 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-vxxmk"] Jan 30 05:26:36 crc kubenswrapper[4931]: W0130 05:26:36.988706 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6bc53b_31f7_4650_aab3_d4bcf8b685ab.slice/crio-9ea7e55f3db83940154f2b0bcb0d4ef000b7a1cdefa0062385e86b6b76cab2c2 WatchSource:0}: Error finding container 9ea7e55f3db83940154f2b0bcb0d4ef000b7a1cdefa0062385e86b6b76cab2c2: Status 404 returned error can't find the container with id 9ea7e55f3db83940154f2b0bcb0d4ef000b7a1cdefa0062385e86b6b76cab2c2 Jan 30 05:26:36 crc kubenswrapper[4931]: I0130 05:26:36.991215 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.474683 4931 generic.go:334] "Generic (PLEG): container finished" podID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerID="096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd" exitCode=0 Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.475346 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" event={"ID":"98fe74d3-fa52-4814-8497-1a9bb9ea72ed","Type":"ContainerDied","Data":"096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd"} Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.475382 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" event={"ID":"98fe74d3-fa52-4814-8497-1a9bb9ea72ed","Type":"ContainerStarted","Data":"e1641f306bf07b5142c6dd94dd4d7be821af4a934007d916dd0dd69749c5f578"} Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.476628 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af6bc53b-31f7-4650-aab3-d4bcf8b685ab","Type":"ContainerStarted","Data":"9ea7e55f3db83940154f2b0bcb0d4ef000b7a1cdefa0062385e86b6b76cab2c2"} Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.479984 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f487872-4003-4559-8f72-1c6022321160","Type":"ContainerStarted","Data":"16910d816a28eef85f00dcbaeae0524c9893ffdb8537d1cf664654c8a4d009f8"} Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.844142 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.906753 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.987815 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-log-httpd\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.987860 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl9x6\" (UniqueName: \"kubernetes.io/projected/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-kube-api-access-nl9x6\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.988030 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-combined-ca-bundle\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.988072 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-sg-core-conf-yaml\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.988143 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-scripts\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.988182 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-config-data\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.988239 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-run-httpd\") pod \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\" (UID: \"debbaca0-0d1f-47cd-bb8e-8e09e4a65307\") " Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.989157 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.989873 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.994698 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-kube-api-access-nl9x6" (OuterVolumeSpecName: "kube-api-access-nl9x6") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "kube-api-access-nl9x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:37 crc kubenswrapper[4931]: I0130 05:26:37.995084 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-scripts" (OuterVolumeSpecName: "scripts") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.031665 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.076084 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-config-data" (OuterVolumeSpecName: "config-data") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.077231 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "debbaca0-0d1f-47cd-bb8e-8e09e4a65307" (UID: "debbaca0-0d1f-47cd-bb8e-8e09e4a65307"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093583 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093614 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093626 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093635 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093646 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093655 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.093663 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl9x6\" (UniqueName: \"kubernetes.io/projected/debbaca0-0d1f-47cd-bb8e-8e09e4a65307-kube-api-access-nl9x6\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.421709 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d69b6c966-npv8t" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.453133 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d69b6c966-npv8t" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.500126 4931 generic.go:334] "Generic (PLEG): container finished" podID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerID="ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418" exitCode=0 Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.500190 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerDied","Data":"ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418"} Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.500216 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"debbaca0-0d1f-47cd-bb8e-8e09e4a65307","Type":"ContainerDied","Data":"c0144289ab3513de686db41a01bd60e595d46a3f8bcaea66b48e5c2753f90feb"} Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.500234 4931 scope.go:117] "RemoveContainer" containerID="21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.500355 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.518918 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" event={"ID":"98fe74d3-fa52-4814-8497-1a9bb9ea72ed","Type":"ContainerStarted","Data":"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f"} Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.518990 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.526617 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-794bfbdd44-9msr6"] Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.526896 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-794bfbdd44-9msr6" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api-log" containerID="cri-o://845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5" gracePeriod=30 Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.526929 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-794bfbdd44-9msr6" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api" containerID="cri-o://7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819" gracePeriod=30 Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.532699 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api-log" containerID="cri-o://2a8b28eadcd454ca8adf3b36ea9153ce1d0f727ce7e3b65bd14a0471ebbea32f" gracePeriod=30 Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.532777 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af6bc53b-31f7-4650-aab3-d4bcf8b685ab","Type":"ContainerStarted","Data":"6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3"} Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.532800 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af6bc53b-31f7-4650-aab3-d4bcf8b685ab","Type":"ContainerStarted","Data":"2a8b28eadcd454ca8adf3b36ea9153ce1d0f727ce7e3b65bd14a0471ebbea32f"} Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.532818 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.532841 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api" containerID="cri-o://6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3" gracePeriod=30 Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.544711 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f487872-4003-4559-8f72-1c6022321160","Type":"ContainerStarted","Data":"1ee4814b304dc2facdec4fe5a7ec548e21648d29d6f5eb9a0d58da2eecb4e24b"} Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.559656 4931 scope.go:117] "RemoveContainer" containerID="c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.565914 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" 
podStartSLOduration=3.565893691 podStartE2EDuration="3.565893691s" podCreationTimestamp="2026-01-30 05:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:38.547280601 +0000 UTC m=+1133.917190878" watchObservedRunningTime="2026-01-30 05:26:38.565893691 +0000 UTC m=+1133.935803948" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.615070 4931 scope.go:117] "RemoveContainer" containerID="ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.620126 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.620107549 podStartE2EDuration="3.620107549s" podCreationTimestamp="2026-01-30 05:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:38.580864787 +0000 UTC m=+1133.950775044" watchObservedRunningTime="2026-01-30 05:26:38.620107549 +0000 UTC m=+1133.990017806" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.652504 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.671615 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.718414 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:26:38 crc kubenswrapper[4931]: E0130 05:26:38.725166 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="proxy-httpd" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.725195 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="proxy-httpd" Jan 30 05:26:38 crc kubenswrapper[4931]: E0130 05:26:38.725219 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="sg-core" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.725226 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="sg-core" Jan 30 05:26:38 crc kubenswrapper[4931]: E0130 05:26:38.725258 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="ceilometer-notification-agent" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.725265 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="ceilometer-notification-agent" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.725701 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="sg-core" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.725725 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="ceilometer-notification-agent" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.725749 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" containerName="proxy-httpd" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.730661 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:26:38 crc 
kubenswrapper[4931]: I0130 05:26:38.730788 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.732700 4931 scope.go:117] "RemoveContainer" containerID="21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031" Jan 30 05:26:38 crc kubenswrapper[4931]: E0130 05:26:38.737089 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031\": container with ID starting with 21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031 not found: ID does not exist" containerID="21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.737133 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031"} err="failed to get container status \"21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031\": rpc error: code = NotFound desc = could not find container \"21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031\": container with ID starting with 21b691b2e09482e45017a7d0d0da5ee40918b81eaf090f346eedfe083ceae031 not found: ID does not exist" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.737159 4931 scope.go:117] "RemoveContainer" containerID="c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.738922 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.739049 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 05:26:38 crc kubenswrapper[4931]: E0130 05:26:38.739496 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547\": container with ID starting with c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547 not found: ID does not exist" containerID="c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.739573 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547"} err="failed to get container status \"c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547\": rpc error: code = NotFound desc = could not find container \"c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547\": container with ID starting with c463c463e45ea10f9d2515585c1aaf9faf895d02d8c1374d44f662c63e4c8547 not found: ID does not exist" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.739624 4931 scope.go:117] "RemoveContainer" containerID="ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418" Jan 30 05:26:38 crc kubenswrapper[4931]: E0130 05:26:38.745087 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418\": container with ID starting with ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418 not found: ID does not exist" 
containerID="ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.745120 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418"} err="failed to get container status \"ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418\": rpc error: code = NotFound desc = could not find container \"ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418\": container with ID starting with ad927df37265a1a2b98e1c5c50856998727c6049b8e51b41498d7e63a4b86418 not found: ID does not exist" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.810873 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-config-data\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.811135 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-run-httpd\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.811180 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-scripts\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.811195 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.811241 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-log-httpd\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.811262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.811297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j42\" (UniqueName: \"kubernetes.io/projected/e5755369-fc75-443e-b608-996b7212ac94-kube-api-access-x2j42\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-config-data\") pod \"ceilometer-0\" (UID: 
\"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913537 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-run-httpd\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-scripts\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913597 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913652 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-log-httpd\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913681 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.913727 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2j42\" (UniqueName: \"kubernetes.io/projected/e5755369-fc75-443e-b608-996b7212ac94-kube-api-access-x2j42\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.915650 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-run-httpd\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.917978 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-log-httpd\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.921232 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-scripts\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.922389 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-config-data\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 
30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.925826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.929271 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:38 crc kubenswrapper[4931]: I0130 05:26:38.934634 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2j42\" (UniqueName: \"kubernetes.io/projected/e5755369-fc75-443e-b608-996b7212ac94-kube-api-access-x2j42\") pod \"ceilometer-0\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " pod="openstack/ceilometer-0" Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.063707 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.430948 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debbaca0-0d1f-47cd-bb8e-8e09e4a65307" path="/var/lib/kubelet/pods/debbaca0-0d1f-47cd-bb8e-8e09e4a65307/volumes" Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.532163 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:26:39 crc kubenswrapper[4931]: W0130 05:26:39.538863 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-ea5ec3936bd62e44ba566c0aec793fe2ef89fc6023d7cad6b0242228b7b8d07a WatchSource:0}: Error finding container ea5ec3936bd62e44ba566c0aec793fe2ef89fc6023d7cad6b0242228b7b8d07a: Status 404 returned error can't find the container with id ea5ec3936bd62e44ba566c0aec793fe2ef89fc6023d7cad6b0242228b7b8d07a Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.556598 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerStarted","Data":"ea5ec3936bd62e44ba566c0aec793fe2ef89fc6023d7cad6b0242228b7b8d07a"} Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.564369 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerID="845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5" exitCode=143 Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.564454 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794bfbdd44-9msr6" event={"ID":"c3dfec36-0758-42c6-8c28-997044eb59a3","Type":"ContainerDied","Data":"845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5"} Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.568802 4931 generic.go:334] "Generic (PLEG): container finished" podID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerID="2a8b28eadcd454ca8adf3b36ea9153ce1d0f727ce7e3b65bd14a0471ebbea32f" exitCode=143 Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.568878 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"af6bc53b-31f7-4650-aab3-d4bcf8b685ab","Type":"ContainerDied","Data":"2a8b28eadcd454ca8adf3b36ea9153ce1d0f727ce7e3b65bd14a0471ebbea32f"} Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.574779 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f487872-4003-4559-8f72-1c6022321160","Type":"ContainerStarted","Data":"136922e5f994f3ec703ee0b76647539238ec38bef505c93a23f26dc8f73ef24d"} Jan 30 05:26:39 crc kubenswrapper[4931]: I0130 05:26:39.593678 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.72480247 podStartE2EDuration="4.593663766s" podCreationTimestamp="2026-01-30 05:26:35 +0000 UTC" firstStartedPulling="2026-01-30 05:26:36.64675765 +0000 UTC m=+1132.016667907" lastFinishedPulling="2026-01-30 05:26:37.515618946 +0000 UTC m=+1132.885529203" observedRunningTime="2026-01-30 05:26:39.591518039 +0000 UTC m=+1134.961428326" watchObservedRunningTime="2026-01-30 05:26:39.593663766 +0000 UTC m=+1134.963574023" Jan 30 05:26:40 crc kubenswrapper[4931]: I0130 05:26:40.595552 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerStarted","Data":"08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581"} Jan 30 05:26:41 crc kubenswrapper[4931]: I0130 05:26:41.201300 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 05:26:41 crc kubenswrapper[4931]: I0130 05:26:41.608700 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerStarted","Data":"b2d093b072c9cdec044442a6734371a3413de8a6cea48d10abb06780f1cd4e75"} Jan 30 05:26:41 crc kubenswrapper[4931]: I0130 05:26:41.608737 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerStarted","Data":"f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d"} Jan 30 05:26:41 crc kubenswrapper[4931]: I0130 05:26:41.687603 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-794bfbdd44-9msr6" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:45332->10.217.0.164:9311: read: connection reset by peer" Jan 30 05:26:41 crc kubenswrapper[4931]: I0130 05:26:41.687960 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-794bfbdd44-9msr6" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:45318->10.217.0.164:9311: read: connection reset by peer" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.344144 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.412407 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data-custom\") pod \"c3dfec36-0758-42c6-8c28-997044eb59a3\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.412506 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3dfec36-0758-42c6-8c28-997044eb59a3-logs\") pod \"c3dfec36-0758-42c6-8c28-997044eb59a3\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.412641 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l264t\" (UniqueName: \"kubernetes.io/projected/c3dfec36-0758-42c6-8c28-997044eb59a3-kube-api-access-l264t\") pod \"c3dfec36-0758-42c6-8c28-997044eb59a3\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.412705 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-combined-ca-bundle\") pod \"c3dfec36-0758-42c6-8c28-997044eb59a3\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.412786 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data\") pod \"c3dfec36-0758-42c6-8c28-997044eb59a3\" (UID: \"c3dfec36-0758-42c6-8c28-997044eb59a3\") " Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.414117 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3dfec36-0758-42c6-8c28-997044eb59a3-logs" (OuterVolumeSpecName: "logs") pod "c3dfec36-0758-42c6-8c28-997044eb59a3" (UID: "c3dfec36-0758-42c6-8c28-997044eb59a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.419677 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c3dfec36-0758-42c6-8c28-997044eb59a3" (UID: "c3dfec36-0758-42c6-8c28-997044eb59a3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.424478 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3dfec36-0758-42c6-8c28-997044eb59a3-kube-api-access-l264t" (OuterVolumeSpecName: "kube-api-access-l264t") pod "c3dfec36-0758-42c6-8c28-997044eb59a3" (UID: "c3dfec36-0758-42c6-8c28-997044eb59a3"). InnerVolumeSpecName "kube-api-access-l264t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.448446 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3dfec36-0758-42c6-8c28-997044eb59a3" (UID: "c3dfec36-0758-42c6-8c28-997044eb59a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.503750 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data" (OuterVolumeSpecName: "config-data") pod "c3dfec36-0758-42c6-8c28-997044eb59a3" (UID: "c3dfec36-0758-42c6-8c28-997044eb59a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.515788 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3dfec36-0758-42c6-8c28-997044eb59a3-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.515839 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l264t\" (UniqueName: \"kubernetes.io/projected/c3dfec36-0758-42c6-8c28-997044eb59a3-kube-api-access-l264t\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.515861 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.515879 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.515897 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c3dfec36-0758-42c6-8c28-997044eb59a3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.622376 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerID="7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819" exitCode=0 Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.622471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794bfbdd44-9msr6" event={"ID":"c3dfec36-0758-42c6-8c28-997044eb59a3","Type":"ContainerDied","Data":"7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819"} Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.622521 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-794bfbdd44-9msr6" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.622542 4931 scope.go:117] "RemoveContainer" containerID="7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.622528 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-794bfbdd44-9msr6" event={"ID":"c3dfec36-0758-42c6-8c28-997044eb59a3","Type":"ContainerDied","Data":"d2345973c91074357738f420a2e030c74216ac2f1de71e5a31e295982e276e96"} Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.659208 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-794bfbdd44-9msr6"] Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.662199 4931 scope.go:117] "RemoveContainer" containerID="845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.668113 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-794bfbdd44-9msr6"] Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.687099 4931 scope.go:117] "RemoveContainer" containerID="7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819" Jan 30 05:26:42 crc kubenswrapper[4931]: E0130 05:26:42.687683 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819\": container with ID starting with 7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819 not found: ID does not exist" containerID="7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.687733 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819"} err="failed to get container status \"7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819\": rpc error: code = NotFound desc = could not find container \"7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819\": container with ID starting with 7b6be02bbcc8d0b1586d305237296d73b7b3f419c5f0464691a475c292d19819 not found: ID does not exist" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.687761 4931 scope.go:117] "RemoveContainer" containerID="845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5" Jan 30 05:26:42 crc kubenswrapper[4931]: E0130 05:26:42.688133 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5\": container with ID starting with 845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5 not found: ID does not exist" containerID="845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5" Jan 30 05:26:42 crc kubenswrapper[4931]: I0130 05:26:42.688194 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5"} err="failed to get container status \"845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5\": rpc error: code = NotFound desc = could not find container \"845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5\": container with ID starting with 845309e636acb01f517162d220ba0171bf46701cbbd559935997c52d2058a8d5 not found: ID does not exist" Jan 30 
05:26:43 crc kubenswrapper[4931]: I0130 05:26:43.463297 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" path="/var/lib/kubelet/pods/c3dfec36-0758-42c6-8c28-997044eb59a3/volumes" Jan 30 05:26:43 crc kubenswrapper[4931]: I0130 05:26:43.641462 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 05:26:43 crc kubenswrapper[4931]: I0130 05:26:43.669167 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.857656306 podStartE2EDuration="5.669153435s" podCreationTimestamp="2026-01-30 05:26:38 +0000 UTC" firstStartedPulling="2026-01-30 05:26:39.541335516 +0000 UTC m=+1134.911245773" lastFinishedPulling="2026-01-30 05:26:43.352832625 +0000 UTC m=+1138.722742902" observedRunningTime="2026-01-30 05:26:43.66834564 +0000 UTC m=+1139.038255927" watchObservedRunningTime="2026-01-30 05:26:43.669153435 +0000 UTC m=+1139.039063692" Jan 30 05:26:44 crc kubenswrapper[4931]: I0130 05:26:44.659781 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerStarted","Data":"9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f"} Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.167705 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.261810 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-zmls6"] Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.262077 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerName="dnsmasq-dns" containerID="cri-o://d07be3298c7001c20f2b88e58ea237b62170a8ccd4a24bfb286b3d0a2bff7807" gracePeriod=10 Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.466371 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.515152 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.682130 4931 generic.go:334] "Generic (PLEG): container finished" podID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerID="d07be3298c7001c20f2b88e58ea237b62170a8ccd4a24bfb286b3d0a2bff7807" exitCode=0 Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.682529 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="cinder-scheduler" containerID="cri-o://1ee4814b304dc2facdec4fe5a7ec548e21648d29d6f5eb9a0d58da2eecb4e24b" gracePeriod=30 Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.682608 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="probe" containerID="cri-o://136922e5f994f3ec703ee0b76647539238ec38bef505c93a23f26dc8f73ef24d" gracePeriod=30 Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.682620 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" 
event={"ID":"4aa89fd3-2a8a-424c-b3a7-cf743d90a249","Type":"ContainerDied","Data":"d07be3298c7001c20f2b88e58ea237b62170a8ccd4a24bfb286b3d0a2bff7807"} Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.807072 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.922777 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-sb\") pod \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.922844 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-config\") pod \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.922919 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-swift-storage-0\") pod \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.923054 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2lmn\" (UniqueName: \"kubernetes.io/projected/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-kube-api-access-z2lmn\") pod \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.923077 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-nb\") pod \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.923101 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-svc\") pod \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\" (UID: \"4aa89fd3-2a8a-424c-b3a7-cf743d90a249\") " Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.930172 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-kube-api-access-z2lmn" (OuterVolumeSpecName: "kube-api-access-z2lmn") pod "4aa89fd3-2a8a-424c-b3a7-cf743d90a249" (UID: "4aa89fd3-2a8a-424c-b3a7-cf743d90a249"). InnerVolumeSpecName "kube-api-access-z2lmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.980933 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4aa89fd3-2a8a-424c-b3a7-cf743d90a249" (UID: "4aa89fd3-2a8a-424c-b3a7-cf743d90a249"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:46 crc kubenswrapper[4931]: I0130 05:26:46.985636 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4aa89fd3-2a8a-424c-b3a7-cf743d90a249" (UID: "4aa89fd3-2a8a-424c-b3a7-cf743d90a249"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.002162 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4aa89fd3-2a8a-424c-b3a7-cf743d90a249" (UID: "4aa89fd3-2a8a-424c-b3a7-cf743d90a249"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.005875 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4aa89fd3-2a8a-424c-b3a7-cf743d90a249" (UID: "4aa89fd3-2a8a-424c-b3a7-cf743d90a249"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.010265 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-config" (OuterVolumeSpecName: "config") pod "4aa89fd3-2a8a-424c-b3a7-cf743d90a249" (UID: "4aa89fd3-2a8a-424c-b3a7-cf743d90a249"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.025746 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.025937 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.026045 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.026134 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.026387 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.026475 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2lmn\" (UniqueName: \"kubernetes.io/projected/4aa89fd3-2a8a-424c-b3a7-cf743d90a249-kube-api-access-z2lmn\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.696119 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" event={"ID":"4aa89fd3-2a8a-424c-b3a7-cf743d90a249","Type":"ContainerDied","Data":"b7351472d16e045cf1d352d57e3502d62cfe0a1c627e0387d4154e9570e9d7c6"} Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.697545 4931 scope.go:117] "RemoveContainer" containerID="d07be3298c7001c20f2b88e58ea237b62170a8ccd4a24bfb286b3d0a2bff7807" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.696444 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-zmls6" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.701342 4931 generic.go:334] "Generic (PLEG): container finished" podID="1f487872-4003-4559-8f72-1c6022321160" containerID="136922e5f994f3ec703ee0b76647539238ec38bef505c93a23f26dc8f73ef24d" exitCode=0 Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.701490 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f487872-4003-4559-8f72-1c6022321160","Type":"ContainerDied","Data":"136922e5f994f3ec703ee0b76647539238ec38bef505c93a23f26dc8f73ef24d"} Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.726673 4931 scope.go:117] "RemoveContainer" containerID="42238edb208312fe89370f3d6e71cdbdaaf5a688762779ee0068776a658a91e9" Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.731462 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-zmls6"] Jan 30 05:26:47 crc kubenswrapper[4931]: I0130 05:26:47.741528 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-zmls6"] Jan 30 05:26:48 crc kubenswrapper[4931]: I0130 05:26:48.562718 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.435794 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" path="/var/lib/kubelet/pods/4aa89fd3-2a8a-424c-b3a7-cf743d90a249/volumes" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.727128 4931 generic.go:334] "Generic (PLEG): container finished" podID="1f487872-4003-4559-8f72-1c6022321160" containerID="1ee4814b304dc2facdec4fe5a7ec548e21648d29d6f5eb9a0d58da2eecb4e24b" exitCode=0 Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.727170 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f487872-4003-4559-8f72-1c6022321160","Type":"ContainerDied","Data":"1ee4814b304dc2facdec4fe5a7ec548e21648d29d6f5eb9a0d58da2eecb4e24b"} Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.727195 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1f487872-4003-4559-8f72-1c6022321160","Type":"ContainerDied","Data":"16910d816a28eef85f00dcbaeae0524c9893ffdb8537d1cf664654c8a4d009f8"} Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.727205 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16910d816a28eef85f00dcbaeae0524c9893ffdb8537d1cf664654c8a4d009f8" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.736237 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.763393 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dc49c789d-5gcj4" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878092 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f487872-4003-4559-8f72-1c6022321160-etc-machine-id\") pod \"1f487872-4003-4559-8f72-1c6022321160\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878220 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pkqp\" (UniqueName: \"kubernetes.io/projected/1f487872-4003-4559-8f72-1c6022321160-kube-api-access-2pkqp\") pod \"1f487872-4003-4559-8f72-1c6022321160\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878258 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-scripts\") pod \"1f487872-4003-4559-8f72-1c6022321160\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878298 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data\") pod \"1f487872-4003-4559-8f72-1c6022321160\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878315 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-combined-ca-bundle\") pod \"1f487872-4003-4559-8f72-1c6022321160\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878398 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data-custom\") pod \"1f487872-4003-4559-8f72-1c6022321160\" (UID: \"1f487872-4003-4559-8f72-1c6022321160\") " Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878207 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f487872-4003-4559-8f72-1c6022321160-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f487872-4003-4559-8f72-1c6022321160" (UID: "1f487872-4003-4559-8f72-1c6022321160"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.878788 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f487872-4003-4559-8f72-1c6022321160-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.888700 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f487872-4003-4559-8f72-1c6022321160" (UID: "1f487872-4003-4559-8f72-1c6022321160"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.888848 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f487872-4003-4559-8f72-1c6022321160-kube-api-access-2pkqp" (OuterVolumeSpecName: "kube-api-access-2pkqp") pod "1f487872-4003-4559-8f72-1c6022321160" (UID: "1f487872-4003-4559-8f72-1c6022321160"). InnerVolumeSpecName "kube-api-access-2pkqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.899098 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-scripts" (OuterVolumeSpecName: "scripts") pod "1f487872-4003-4559-8f72-1c6022321160" (UID: "1f487872-4003-4559-8f72-1c6022321160"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.947232 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f487872-4003-4559-8f72-1c6022321160" (UID: "1f487872-4003-4559-8f72-1c6022321160"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.981396 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.981433 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.981444 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:49 crc kubenswrapper[4931]: I0130 05:26:49.981454 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pkqp\" (UniqueName: \"kubernetes.io/projected/1f487872-4003-4559-8f72-1c6022321160-kube-api-access-2pkqp\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.049203 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data" (OuterVolumeSpecName: "config-data") pod "1f487872-4003-4559-8f72-1c6022321160" (UID: "1f487872-4003-4559-8f72-1c6022321160"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.082577 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f487872-4003-4559-8f72-1c6022321160-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.244173 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-687c697484-j2btt" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.162:9696/\": dial tcp 10.217.0.162:9696: connect: connection refused" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.736687 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.784000 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.791751 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d9d68b44b-5gp25" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.804151 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.816682 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:50 crc kubenswrapper[4931]: E0130 05:26:50.817000 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817016 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api" Jan 30 05:26:50 crc kubenswrapper[4931]: E0130 05:26:50.817029 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerName="dnsmasq-dns" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817035 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerName="dnsmasq-dns" Jan 30 05:26:50 crc kubenswrapper[4931]: E0130 05:26:50.817049 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="probe" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817055 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="probe" Jan 30 05:26:50 crc kubenswrapper[4931]: E0130 05:26:50.817077 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="cinder-scheduler" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817082 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="cinder-scheduler" Jan 30 05:26:50 crc kubenswrapper[4931]: E0130 05:26:50.817093 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api-log" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817098 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api-log" Jan 30 05:26:50 crc kubenswrapper[4931]: E0130 05:26:50.817113 4931 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerName="init" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817119 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerName="init" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817272 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa89fd3-2a8a-424c-b3a7-cf743d90a249" containerName="dnsmasq-dns" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817287 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817298 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3dfec36-0758-42c6-8c28-997044eb59a3" containerName="barbican-api-log" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817310 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="cinder-scheduler" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.817321 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f487872-4003-4559-8f72-1c6022321160" containerName="probe" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.818184 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.823726 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.870443 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.874981 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d9d68b44b-5gp25" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.897318 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.897466 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplvd\" (UniqueName: \"kubernetes.io/projected/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-kube-api-access-rplvd\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.897488 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.897535 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc 
kubenswrapper[4931]: I0130 05:26:50.897589 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.897607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999221 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplvd\" (UniqueName: \"kubernetes.io/projected/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-kube-api-access-rplvd\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999269 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999315 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999369 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999458 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:50 crc kubenswrapper[4931]: I0130 05:26:50.999492 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.004637 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.005881 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.012607 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.020984 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.020999 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplvd\" (UniqueName: \"kubernetes.io/projected/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-kube-api-access-rplvd\") pod \"cinder-scheduler-0\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.021050 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-97bdbd495-2prdt" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.146564 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.267649 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.368231 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.419871 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d9d68b44b-5gp25"] Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.436742 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f487872-4003-4559-8f72-1c6022321160" path="/var/lib/kubelet/pods/1f487872-4003-4559-8f72-1c6022321160/volumes" Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.617181 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:26:51 crc kubenswrapper[4931]: I0130 05:26:51.757118 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd","Type":"ContainerStarted","Data":"300c4ac1a78a0898043a5bb9c0ea1e976d3646b2689510ee5ed5d0a93470d249"} Jan 30 05:26:52 crc kubenswrapper[4931]: I0130 05:26:52.766885 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d9d68b44b-5gp25" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-log" containerID="cri-o://133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21" gracePeriod=30 Jan 30 05:26:52 crc kubenswrapper[4931]: I0130 05:26:52.767053 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd","Type":"ContainerStarted","Data":"9398f7e713fb447b3a151d286be2d2910e4d8535fd421e906c46b8cc2c9a4728"} Jan 30 05:26:52 crc kubenswrapper[4931]: I0130 05:26:52.767403 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d9d68b44b-5gp25" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-api" containerID="cri-o://fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82" gracePeriod=30 Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.580616 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.640290 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc49c789d-5gcj4"] Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.640579 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc49c789d-5gcj4" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-api" containerID="cri-o://f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a" gracePeriod=30 Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.640653 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc49c789d-5gcj4" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-httpd" containerID="cri-o://e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2" gracePeriod=30 Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.776822 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd","Type":"ContainerStarted","Data":"571155fa2c4a4cc11bc78f96b7b5b636cdda183726d54338d2bf0cc02d77f003"} Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.778661 4931 generic.go:334] "Generic (PLEG): container finished" podID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerID="133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21" exitCode=143 Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.778697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9d68b44b-5gp25" event={"ID":"b92991ff-5b79-452a-b5ac-9dc90ab42f68","Type":"ContainerDied","Data":"133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21"} Jan 30 05:26:53 crc kubenswrapper[4931]: I0130 05:26:53.796692 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.796676233 podStartE2EDuration="3.796676233s" podCreationTimestamp="2026-01-30 05:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:53.794474365 +0000 UTC m=+1149.164384622" watchObservedRunningTime="2026-01-30 05:26:53.796676233 +0000 UTC m=+1149.166586490" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.791350 4931 generic.go:334] "Generic (PLEG): container finished" podID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerID="e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2" exitCode=0 Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.791412 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc49c789d-5gcj4" event={"ID":"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec","Type":"ContainerDied","Data":"e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2"} Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.791670 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.792940 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.794499 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lxhtr" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.795937 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.796896 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.800742 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.870581 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hxs9\" (UniqueName: \"kubernetes.io/projected/43c51602-467a-46a4-a7e5-898e988d56b4-kube-api-access-2hxs9\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.870858 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.870925 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.871072 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.973471 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.973535 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.973584 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.973677 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2hxs9\" (UniqueName: \"kubernetes.io/projected/43c51602-467a-46a4-a7e5-898e988d56b4-kube-api-access-2hxs9\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.974636 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.979483 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config-secret\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.980947 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:54 crc kubenswrapper[4931]: I0130 05:26:54.999326 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hxs9\" (UniqueName: \"kubernetes.io/projected/43c51602-467a-46a4-a7e5-898e988d56b4-kube-api-access-2hxs9\") pod \"openstackclient\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.115244 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.135700 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.166961 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.193868 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.195213 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.204529 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.282485 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.282719 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.282780 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k758\" (UniqueName: \"kubernetes.io/projected/6b263e8e-7618-4044-bed1-b35174d6a8f4-kube-api-access-5k758\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.282824 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config-secret\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: E0130 05:26:55.325794 4931 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 30 05:26:55 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_43c51602-467a-46a4-a7e5-898e988d56b4_0(6fc7fa539391e55acf05b41581f8c91bef648896bfa36ae10f211c8f9c6c301a): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6fc7fa539391e55acf05b41581f8c91bef648896bfa36ae10f211c8f9c6c301a" Netns:"/var/run/netns/1e968b92-6b2c-4c2c-9be1-cc1e18187ac7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=6fc7fa539391e55acf05b41581f8c91bef648896bfa36ae10f211c8f9c6c301a;K8S_POD_UID=43c51602-467a-46a4-a7e5-898e988d56b4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/43c51602-467a-46a4-a7e5-898e988d56b4]: expected pod UID "43c51602-467a-46a4-a7e5-898e988d56b4" but got "6b263e8e-7618-4044-bed1-b35174d6a8f4" from Kube API Jan 30 05:26:55 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 05:26:55 crc kubenswrapper[4931]: > Jan 30 05:26:55 crc kubenswrapper[4931]: E0130 05:26:55.325862 4931 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err=< Jan 30 05:26:55 crc kubenswrapper[4931]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_43c51602-467a-46a4-a7e5-898e988d56b4_0(6fc7fa539391e55acf05b41581f8c91bef648896bfa36ae10f211c8f9c6c301a): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6fc7fa539391e55acf05b41581f8c91bef648896bfa36ae10f211c8f9c6c301a" Netns:"/var/run/netns/1e968b92-6b2c-4c2c-9be1-cc1e18187ac7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=6fc7fa539391e55acf05b41581f8c91bef648896bfa36ae10f211c8f9c6c301a;K8S_POD_UID=43c51602-467a-46a4-a7e5-898e988d56b4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/43c51602-467a-46a4-a7e5-898e988d56b4]: expected pod UID "43c51602-467a-46a4-a7e5-898e988d56b4" but got "6b263e8e-7618-4044-bed1-b35174d6a8f4" from Kube API Jan 30 05:26:55 crc kubenswrapper[4931]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 30 05:26:55 crc kubenswrapper[4931]: > pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.384871 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.384915 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.384998 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k758\" (UniqueName: \"kubernetes.io/projected/6b263e8e-7618-4044-bed1-b35174d6a8f4-kube-api-access-5k758\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.385055 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config-secret\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.386060 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.390043 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.390297 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config-secret\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.404947 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k758\" (UniqueName: \"kubernetes.io/projected/6b263e8e-7618-4044-bed1-b35174d6a8f4-kube-api-access-5k758\") pod \"openstackclient\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.515108 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.680711 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-687c697484-j2btt_84203bc9-afb4-42cb-843d-c211490ce275/neutron-api/0.log" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.680777 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.791600 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-ovndb-tls-certs\") pod \"84203bc9-afb4-42cb-843d-c211490ce275\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.791897 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-config\") pod \"84203bc9-afb4-42cb-843d-c211490ce275\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.791919 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jwc6\" (UniqueName: \"kubernetes.io/projected/84203bc9-afb4-42cb-843d-c211490ce275-kube-api-access-6jwc6\") pod \"84203bc9-afb4-42cb-843d-c211490ce275\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.791962 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-httpd-config\") pod \"84203bc9-afb4-42cb-843d-c211490ce275\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.792133 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-combined-ca-bundle\") pod \"84203bc9-afb4-42cb-843d-c211490ce275\" (UID: \"84203bc9-afb4-42cb-843d-c211490ce275\") " Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.816825 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-687c697484-j2btt_84203bc9-afb4-42cb-843d-c211490ce275/neutron-api/0.log" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.816873 4931 generic.go:334] "Generic (PLEG): container finished" podID="84203bc9-afb4-42cb-843d-c211490ce275" containerID="56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e" exitCode=137 Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.816956 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.817000 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c697484-j2btt" event={"ID":"84203bc9-afb4-42cb-843d-c211490ce275","Type":"ContainerDied","Data":"56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e"} Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.817060 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-687c697484-j2btt" event={"ID":"84203bc9-afb4-42cb-843d-c211490ce275","Type":"ContainerDied","Data":"ae04b2bb5e94306ea64985be9c8f6deb0c75c6536be8008e597d586d1aee985e"} Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.817078 4931 scope.go:117] "RemoveContainer" containerID="14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.817089 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-687c697484-j2btt" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.820576 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "84203bc9-afb4-42cb-843d-c211490ce275" (UID: "84203bc9-afb4-42cb-843d-c211490ce275"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.835412 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84203bc9-afb4-42cb-843d-c211490ce275-kube-api-access-6jwc6" (OuterVolumeSpecName: "kube-api-access-6jwc6") pod "84203bc9-afb4-42cb-843d-c211490ce275" (UID: "84203bc9-afb4-42cb-843d-c211490ce275"). InnerVolumeSpecName "kube-api-access-6jwc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.838371 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.854632 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="43c51602-467a-46a4-a7e5-898e988d56b4" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.864867 4931 scope.go:117] "RemoveContainer" containerID="56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.868552 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-config" (OuterVolumeSpecName: "config") pod "84203bc9-afb4-42cb-843d-c211490ce275" (UID: "84203bc9-afb4-42cb-843d-c211490ce275"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.883222 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "84203bc9-afb4-42cb-843d-c211490ce275" (UID: "84203bc9-afb4-42cb-843d-c211490ce275"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.887462 4931 scope.go:117] "RemoveContainer" containerID="14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.891525 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84203bc9-afb4-42cb-843d-c211490ce275" (UID: "84203bc9-afb4-42cb-843d-c211490ce275"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:55 crc kubenswrapper[4931]: E0130 05:26:55.891553 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625\": container with ID starting with 14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625 not found: ID does not exist" containerID="14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.891590 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625"} err="failed to get container status \"14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625\": rpc error: code = NotFound desc = could not find container \"14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625\": container with ID starting with 14ccbe55d693286bb36c8c2d9d691bc7dcb9c2a613cfa2888cedb7d4d536b625 not found: ID does not exist" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.891616 4931 scope.go:117] "RemoveContainer" containerID="56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e" Jan 30 05:26:55 crc kubenswrapper[4931]: E0130 05:26:55.893638 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e\": container with ID starting with 56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e not found: ID does not exist" containerID="56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.893697 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e"} err="failed to get container status \"56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e\": rpc error: code = NotFound desc = could not find container \"56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e\": container with ID starting with 56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e not found: ID does not exist" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.902860 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.902908 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.902920 4931 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.902928 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/84203bc9-afb4-42cb-843d-c211490ce275-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:55 crc kubenswrapper[4931]: I0130 05:26:55.902937 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jwc6\" (UniqueName: \"kubernetes.io/projected/84203bc9-afb4-42cb-843d-c211490ce275-kube-api-access-6jwc6\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.003740 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hxs9\" (UniqueName: \"kubernetes.io/projected/43c51602-467a-46a4-a7e5-898e988d56b4-kube-api-access-2hxs9\") pod \"43c51602-467a-46a4-a7e5-898e988d56b4\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.003891 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config\") pod \"43c51602-467a-46a4-a7e5-898e988d56b4\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.003912 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config-secret\") pod \"43c51602-467a-46a4-a7e5-898e988d56b4\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.003949 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-combined-ca-bundle\") pod \"43c51602-467a-46a4-a7e5-898e988d56b4\" (UID: \"43c51602-467a-46a4-a7e5-898e988d56b4\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.004848 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "43c51602-467a-46a4-a7e5-898e988d56b4" (UID: "43c51602-467a-46a4-a7e5-898e988d56b4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.009814 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c51602-467a-46a4-a7e5-898e988d56b4-kube-api-access-2hxs9" (OuterVolumeSpecName: "kube-api-access-2hxs9") pod "43c51602-467a-46a4-a7e5-898e988d56b4" (UID: "43c51602-467a-46a4-a7e5-898e988d56b4"). InnerVolumeSpecName "kube-api-access-2hxs9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.011108 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c51602-467a-46a4-a7e5-898e988d56b4" (UID: "43c51602-467a-46a4-a7e5-898e988d56b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.011319 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "43c51602-467a-46a4-a7e5-898e988d56b4" (UID: "43c51602-467a-46a4-a7e5-898e988d56b4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.086628 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.105494 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hxs9\" (UniqueName: \"kubernetes.io/projected/43c51602-467a-46a4-a7e5-898e988d56b4-kube-api-access-2hxs9\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.105522 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.105531 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.105539 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c51602-467a-46a4-a7e5-898e988d56b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.153726 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.189287 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-687c697484-j2btt"] Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.197434 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-687c697484-j2btt"] Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.327971 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d9d68b44b-5gp25" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.408968 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-internal-tls-certs\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409012 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-public-tls-certs\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409049 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-combined-ca-bundle\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409085 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-scripts\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409112 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-config-data\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409156 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bz2z\" (UniqueName: \"kubernetes.io/projected/b92991ff-5b79-452a-b5ac-9dc90ab42f68-kube-api-access-9bz2z\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409180 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92991ff-5b79-452a-b5ac-9dc90ab42f68-logs\") pod \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\" (UID: \"b92991ff-5b79-452a-b5ac-9dc90ab42f68\") " Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.409938 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92991ff-5b79-452a-b5ac-9dc90ab42f68-logs" (OuterVolumeSpecName: "logs") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.417219 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-scripts" (OuterVolumeSpecName: "scripts") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.417238 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92991ff-5b79-452a-b5ac-9dc90ab42f68-kube-api-access-9bz2z" (OuterVolumeSpecName: "kube-api-access-9bz2z") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "kube-api-access-9bz2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.462274 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-config-data" (OuterVolumeSpecName: "config-data") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.471046 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.511215 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.511259 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.511280 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.511291 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.511302 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bz2z\" (UniqueName: \"kubernetes.io/projected/b92991ff-5b79-452a-b5ac-9dc90ab42f68-kube-api-access-9bz2z\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.511313 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b92991ff-5b79-452a-b5ac-9dc90ab42f68-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.546678 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b92991ff-5b79-452a-b5ac-9dc90ab42f68" (UID: "b92991ff-5b79-452a-b5ac-9dc90ab42f68"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.613316 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.613354 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b92991ff-5b79-452a-b5ac-9dc90ab42f68-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.826983 4931 generic.go:334] "Generic (PLEG): container finished" podID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerID="fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82" exitCode=0 Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.827050 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9d68b44b-5gp25" event={"ID":"b92991ff-5b79-452a-b5ac-9dc90ab42f68","Type":"ContainerDied","Data":"fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82"} Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.827082 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9d68b44b-5gp25" event={"ID":"b92991ff-5b79-452a-b5ac-9dc90ab42f68","Type":"ContainerDied","Data":"040b81795acd0bef7c76b7a99d650deaac66b5fa82f97baf669121be56928797"} Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.827101 4931 scope.go:117] "RemoveContainer" containerID="fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.827203 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9d68b44b-5gp25" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.846977 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.847028 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6b263e8e-7618-4044-bed1-b35174d6a8f4","Type":"ContainerStarted","Data":"14528f0946216f7b2b6764667e27591d86ee74bba97b6ce9081e9dedf29c1572"} Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.867508 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d9d68b44b-5gp25"] Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.867903 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="43c51602-467a-46a4-a7e5-898e988d56b4" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.873903 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d9d68b44b-5gp25"] Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.891922 4931 scope.go:117] "RemoveContainer" containerID="133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.924565 4931 scope.go:117] "RemoveContainer" containerID="fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82" Jan 30 05:26:56 crc kubenswrapper[4931]: E0130 05:26:56.925504 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82\": container with ID starting with fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82 not found: ID does not exist" containerID="fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.925547 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82"} err="failed to get container status \"fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82\": rpc error: code = NotFound desc = could not find container \"fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82\": container with ID starting with fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82 not found: ID does not exist" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.925573 4931 scope.go:117] "RemoveContainer" containerID="133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21" Jan 30 05:26:56 crc kubenswrapper[4931]: E0130 05:26:56.927451 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21\": container with ID starting with 133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21 not found: ID does not exist" containerID="133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21" Jan 30 05:26:56 crc kubenswrapper[4931]: I0130 05:26:56.927484 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21"} err="failed to get container status \"133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21\": rpc error: code = NotFound desc = could not find container \"133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21\": container with ID starting with 
133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21 not found: ID does not exist" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.362698 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.363024 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.363063 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.363627 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.363685 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f" gracePeriod=600 Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.434611 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c51602-467a-46a4-a7e5-898e988d56b4" path="/var/lib/kubelet/pods/43c51602-467a-46a4-a7e5-898e988d56b4/volumes" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.435113 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84203bc9-afb4-42cb-843d-c211490ce275" path="/var/lib/kubelet/pods/84203bc9-afb4-42cb-843d-c211490ce275/volumes" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.435881 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" path="/var/lib/kubelet/pods/b92991ff-5b79-452a-b5ac-9dc90ab42f68/volumes"
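
The entries above trace a complete probe-driven restart: patch_prober reports the liveness endpoint (127.0.0.1:8798/health) refusing connections, the sync loop marks the pod unhealthy, kuberuntime_manager records that the container "failed liveness probe, will be restarted", and kuberuntime_container kills it with its 600-second grace period so it can be started again in place. (The preceding "not found: ID does not exist" errors are the benign tail of a delete racing with CRI-O garbage collection, and the "Cleaned up orphaned pod volumes dir" entries finish off the earlier pod deletions.) A minimal sketch of pairing the two probe-restart messages when auditing a log like this offline; it assumes `journalctl -u kubelet --no-pager` output with one entry per line on stdin, and everything beyond the quoted message strings is our own:

    #!/usr/bin/env python3
    # Pair liveness-probe failures with the container kills they trigger.
    import re
    import sys

    FAIL = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
    KILL = re.compile(r'"Killing container with a grace period" pod="([^"]+)".*gracePeriod=(\d+)')

    pending = set()  # pods with a liveness failure not yet answered by a kill
    for line in sys.stdin:
        failed = FAIL.search(line)
        if failed:
            pending.add(failed.group(1))
            continue
        killed = KILL.search(line)
        # Kills with no preceding liveness failure (API-driven deletions,
        # like the gracePeriod=30 ones later in this log) are ignored.
        if killed and killed.group(1) in pending:
            pending.discard(killed.group(1))
            print(f"{killed.group(1)}: liveness failure -> restart "
                  f"(gracePeriod={killed.group(2)}s)")
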
Need to start a new one" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.837959 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-combined-ca-bundle\") pod \"807d8709-a403-4186-83f5-ec76aee793fe\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.838037 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data\") pod \"807d8709-a403-4186-83f5-ec76aee793fe\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.838087 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807d8709-a403-4186-83f5-ec76aee793fe-logs\") pod \"807d8709-a403-4186-83f5-ec76aee793fe\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.838164 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrvbk\" (UniqueName: \"kubernetes.io/projected/807d8709-a403-4186-83f5-ec76aee793fe-kube-api-access-jrvbk\") pod \"807d8709-a403-4186-83f5-ec76aee793fe\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.838218 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data-custom\") pod \"807d8709-a403-4186-83f5-ec76aee793fe\" (UID: \"807d8709-a403-4186-83f5-ec76aee793fe\") " Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.841747 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807d8709-a403-4186-83f5-ec76aee793fe-logs" (OuterVolumeSpecName: "logs") pod "807d8709-a403-4186-83f5-ec76aee793fe" (UID: "807d8709-a403-4186-83f5-ec76aee793fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.861605 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "807d8709-a403-4186-83f5-ec76aee793fe" (UID: "807d8709-a403-4186-83f5-ec76aee793fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.863637 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807d8709-a403-4186-83f5-ec76aee793fe-kube-api-access-jrvbk" (OuterVolumeSpecName: "kube-api-access-jrvbk") pod "807d8709-a403-4186-83f5-ec76aee793fe" (UID: "807d8709-a403-4186-83f5-ec76aee793fe"). InnerVolumeSpecName "kube-api-access-jrvbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.897463 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "807d8709-a403-4186-83f5-ec76aee793fe" (UID: "807d8709-a403-4186-83f5-ec76aee793fe"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.900565 4931 generic.go:334] "Generic (PLEG): container finished" podID="807d8709-a403-4186-83f5-ec76aee793fe" containerID="4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0" exitCode=137 Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.900615 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" event={"ID":"807d8709-a403-4186-83f5-ec76aee793fe","Type":"ContainerDied","Data":"4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0"} Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.900640 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" event={"ID":"807d8709-a403-4186-83f5-ec76aee793fe","Type":"ContainerDied","Data":"fc0c653d3e574db62881709b302919c961837f9a8fc28421f26c150c1cbda477"} Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.900656 4931 scope.go:117] "RemoveContainer" containerID="4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.900741 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f7d589966-mkfs5" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.910356 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f" exitCode=0 Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.910391 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f"} Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.910432 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"083c2726f719c1b6c228fc0d209a309a403985263c1ced3ea0982529442fd973"} Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.933601 4931 scope.go:117] "RemoveContainer" containerID="74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.936010 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data" (OuterVolumeSpecName: "config-data") pod "807d8709-a403-4186-83f5-ec76aee793fe" (UID: "807d8709-a403-4186-83f5-ec76aee793fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.941072 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrvbk\" (UniqueName: \"kubernetes.io/projected/807d8709-a403-4186-83f5-ec76aee793fe-kube-api-access-jrvbk\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.941100 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.941110 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.941120 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807d8709-a403-4186-83f5-ec76aee793fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.941129 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807d8709-a403-4186-83f5-ec76aee793fe-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.970582 4931 scope.go:117] "RemoveContainer" containerID="4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0" Jan 30 05:26:57 crc kubenswrapper[4931]: E0130 05:26:57.971965 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0\": container with ID starting with 4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0 not found: ID does not exist" containerID="4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.971999 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0"} err="failed to get container status \"4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0\": rpc error: code = NotFound desc = could not find container \"4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0\": container with ID starting with 4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0 not found: ID does not exist" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.972021 4931 scope.go:117] "RemoveContainer" containerID="74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6" Jan 30 05:26:57 crc kubenswrapper[4931]: E0130 05:26:57.972382 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6\": container with ID starting with 74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6 not found: ID does not exist" containerID="74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.972401 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6"} err="failed to get container status 
\"74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6\": rpc error: code = NotFound desc = could not find container \"74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6\": container with ID starting with 74b2b389570e4a1035ef2e2c5f7eb1cceb4dcfec00b6a918fb6f7e2900be1dd6 not found: ID does not exist" Jan 30 05:26:57 crc kubenswrapper[4931]: I0130 05:26:57.972412 4931 scope.go:117] "RemoveContainer" containerID="60aa2b4543ca7f8bb21bef3c167f0da099829d6ddc544f185e09f8c4de74ad75" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.247848 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-f7d589966-mkfs5"] Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.261908 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-76fb878d5c-s22sw"] Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.266197 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener-log" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266222 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener-log" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.266233 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-log" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266239 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-log" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.266255 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-api" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266260 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-api" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.266270 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-api" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266276 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-api" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.266287 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-httpd" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266293 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-httpd" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.266304 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266309 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266569 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266584 4931 
memory_manager.go:354] "RemoveStaleState removing state" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-httpd" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266591 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="807d8709-a403-4186-83f5-ec76aee793fe" containerName="barbican-keystone-listener-log" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266598 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="84203bc9-afb4-42cb-843d-c211490ce275" containerName="neutron-api" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266605 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-log" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.266613 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92991ff-5b79-452a-b5ac-9dc90ab42f68" containerName="placement-api" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.267540 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.273351 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.273615 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.278236 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.283691 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-f7d589966-mkfs5"] Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.297293 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76fb878d5c-s22sw"] Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346503 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-public-tls-certs\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346567 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-log-httpd\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346594 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-combined-ca-bundle\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346642 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxq8\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-kube-api-access-dsxq8\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: 
\"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346661 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-run-httpd\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346686 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-config-data\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346712 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.346750 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-internal-tls-certs\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.448900 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-combined-ca-bundle\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449234 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxq8\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-kube-api-access-dsxq8\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449262 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-run-httpd\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449294 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-config-data\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449332 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: 
\"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449372 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-internal-tls-certs\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449443 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-public-tls-certs\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449481 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-log-httpd\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449910 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-run-httpd\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.449919 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-log-httpd\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.455142 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-public-tls-certs\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.455831 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-combined-ca-bundle\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.456061 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.463156 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-config-data\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 
Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.464900 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxq8\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-kube-api-access-dsxq8\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.471357 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-internal-tls-certs\") pod \"swift-proxy-76fb878d5c-s22sw\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.557976 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc49c789d-5gcj4" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.590917 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.651842 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-combined-ca-bundle\") pod \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.651941 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-ovndb-tls-certs\") pod \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.651970 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-config\") pod \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.651989 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-httpd-config\") pod \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.652075 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgbtk\" (UniqueName: \"kubernetes.io/projected/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-kube-api-access-fgbtk\") pod \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\" (UID: \"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec\") " Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.657801 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-kube-api-access-fgbtk" (OuterVolumeSpecName: "kube-api-access-fgbtk") pod "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" (UID: "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec"). InnerVolumeSpecName "kube-api-access-fgbtk".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.664380 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" (UID: "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.705843 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" (UID: "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.745094 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" (UID: "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.746521 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-config" (OuterVolumeSpecName: "config") pod "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" (UID: "7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.756080 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.756109 4931 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.756118 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.756128 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.756137 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgbtk\" (UniqueName: \"kubernetes.io/projected/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec-kube-api-access-fgbtk\") on node \"crc\" DevicePath \"\"" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.924161 4931 generic.go:334] "Generic (PLEG): container finished" podID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerID="f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a" exitCode=0 Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.924355 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc49c789d-5gcj4" 
event={"ID":"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec","Type":"ContainerDied","Data":"f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a"} Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.925159 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc49c789d-5gcj4" event={"ID":"7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec","Type":"ContainerDied","Data":"49b94c209fcd846b366cb60120c52ee63d74a76288f62e76634d76df2ff577f1"} Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.924467 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc49c789d-5gcj4" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.925239 4931 scope.go:117] "RemoveContainer" containerID="e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.955586 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc49c789d-5gcj4"] Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.958745 4931 scope.go:117] "RemoveContainer" containerID="f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.963668 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dc49c789d-5gcj4"] Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.978561 4931 scope.go:117] "RemoveContainer" containerID="e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.978892 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2\": container with ID starting with e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2 not found: ID does not exist" containerID="e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.978985 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2"} err="failed to get container status \"e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2\": rpc error: code = NotFound desc = could not find container \"e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2\": container with ID starting with e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2 not found: ID does not exist" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.979084 4931 scope.go:117] "RemoveContainer" containerID="f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a" Jan 30 05:26:58 crc kubenswrapper[4931]: E0130 05:26:58.979516 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a\": container with ID starting with f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a not found: ID does not exist" containerID="f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a" Jan 30 05:26:58 crc kubenswrapper[4931]: I0130 05:26:58.979547 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a"} err="failed to get container status \"f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a\": rpc error: code = 
NotFound desc = could not find container \"f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a\": container with ID starting with f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a not found: ID does not exist" Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.180547 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-76fb878d5c-s22sw"] Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.384479 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.384890 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-central-agent" containerID="cri-o://08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581" gracePeriod=30 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.385014 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="proxy-httpd" containerID="cri-o://9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f" gracePeriod=30 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.385043 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="sg-core" containerID="cri-o://b2d093b072c9cdec044442a6734371a3413de8a6cea48d10abb06780f1cd4e75" gracePeriod=30 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.385181 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-notification-agent" containerID="cri-o://f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d" gracePeriod=30 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.389094 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.443502 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" path="/var/lib/kubelet/pods/7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec/volumes" Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.449949 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807d8709-a403-4186-83f5-ec76aee793fe" path="/var/lib/kubelet/pods/807d8709-a403-4186-83f5-ec76aee793fe/volumes" Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.942091 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76fb878d5c-s22sw" event={"ID":"98d21216-5a0f-422c-9642-0ea353a33e82","Type":"ContainerStarted","Data":"02a426537f79889d684c812318ea1dd0bd0af03a098fb1e7d47cd94e43353e1c"} Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.942733 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76fb878d5c-s22sw" event={"ID":"98d21216-5a0f-422c-9642-0ea353a33e82","Type":"ContainerStarted","Data":"3baca7478354f0fb4066c265761c1fd4465b993347daea084eeb2d40cd40bed6"} Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.942756 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.942770 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.942782 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76fb878d5c-s22sw" event={"ID":"98d21216-5a0f-422c-9642-0ea353a33e82","Type":"ContainerStarted","Data":"68d0e2dfe8dc67ba7ff79544ecf0a950e34ec34379d61e5a1edf698fb315e6f7"} Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.948506 4931 generic.go:334] "Generic (PLEG): container finished" podID="e5755369-fc75-443e-b608-996b7212ac94" containerID="9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f" exitCode=0 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.948533 4931 generic.go:334] "Generic (PLEG): container finished" podID="e5755369-fc75-443e-b608-996b7212ac94" containerID="b2d093b072c9cdec044442a6734371a3413de8a6cea48d10abb06780f1cd4e75" exitCode=2 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.948547 4931 generic.go:334] "Generic (PLEG): container finished" podID="e5755369-fc75-443e-b608-996b7212ac94" containerID="08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581" exitCode=0 Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.948566 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerDied","Data":"9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f"} Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.948586 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerDied","Data":"b2d093b072c9cdec044442a6734371a3413de8a6cea48d10abb06780f1cd4e75"} Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.948601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerDied","Data":"08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581"} Jan 30 05:26:59 crc kubenswrapper[4931]: I0130 05:26:59.972183 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-76fb878d5c-s22sw" podStartSLOduration=1.9721654659999999 podStartE2EDuration="1.972165466s" podCreationTimestamp="2026-01-30 05:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:26:59.964356763 +0000 UTC m=+1155.334267030" watchObservedRunningTime="2026-01-30 05:26:59.972165466 +0000 UTC m=+1155.342075723" Jan 30 05:27:01 crc kubenswrapper[4931]: I0130 05:27:01.351886 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 05:27:01 crc kubenswrapper[4931]: I0130 05:27:01.971273 4931 generic.go:334] "Generic (PLEG): container finished" podID="e5755369-fc75-443e-b608-996b7212ac94" containerID="f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d" exitCode=0 Jan 30 05:27:01 crc kubenswrapper[4931]: I0130 05:27:01.971345 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerDied","Data":"f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d"} Jan 30 05:27:04 crc kubenswrapper[4931]: I0130 05:27:04.389031 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:27:04 crc kubenswrapper[4931]: I0130 05:27:04.389478 4931 
Jan 30 05:27:04 crc kubenswrapper[4931]: I0130 05:27:04.389478 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" containerName="kube-state-metrics" containerID="cri-o://b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb" gracePeriod=30 Jan 30 05:27:05 crc kubenswrapper[4931]: I0130 05:27:05.001567 4931 generic.go:334] "Generic (PLEG): container finished" podID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" containerID="b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb" exitCode=2 Jan 30 05:27:05 crc kubenswrapper[4931]: I0130 05:27:05.001789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2","Type":"ContainerDied","Data":"b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb"} Jan 30 05:27:05 crc kubenswrapper[4931]: I0130 05:27:05.371140 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": dial tcp 10.217.0.106:8081: connect: connection refused" Jan 30 05:27:05 crc kubenswrapper[4931]: I0130 05:27:05.502328 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:27:05 crc kubenswrapper[4931]: I0130 05:27:05.502877 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-log" containerID="cri-o://c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57" gracePeriod=30 Jan 30 05:27:05 crc kubenswrapper[4931]: I0130 05:27:05.502941 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-httpd" containerID="cri-o://4dc106a9347c18b30457fcfe0ba0955c89ca0037e4655a069609d973aaa2c8d2" gracePeriod=30 Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.010129 4931 generic.go:334] "Generic (PLEG): container finished" podID="97f44787-3f37-44f1-85a5-4acffef71d95" containerID="c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57" exitCode=143 Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.010172 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97f44787-3f37-44f1-85a5-4acffef71d95","Type":"ContainerDied","Data":"c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57"} Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.847988 4931 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.909880 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2j42\" (UniqueName: \"kubernetes.io/projected/e5755369-fc75-443e-b608-996b7212ac94-kube-api-access-x2j42\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.909931 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-run-httpd\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.910013 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-log-httpd\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.910034 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-sg-core-conf-yaml\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.910054 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-config-data\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.910104 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-scripts\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.910142 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-combined-ca-bundle\") pod \"e5755369-fc75-443e-b608-996b7212ac94\" (UID: \"e5755369-fc75-443e-b608-996b7212ac94\") " Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.911788 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.911807 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.921478 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5755369-fc75-443e-b608-996b7212ac94-kube-api-access-x2j42" (OuterVolumeSpecName: "kube-api-access-x2j42") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "kube-api-access-x2j42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.921663 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-scripts" (OuterVolumeSpecName: "scripts") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.952725 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:06 crc kubenswrapper[4931]: I0130 05:27:06.963459 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.009801 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.011832 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2j42\" (UniqueName: \"kubernetes.io/projected/e5755369-fc75-443e-b608-996b7212ac94-kube-api-access-x2j42\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.011861 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.011870 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5755369-fc75-443e-b608-996b7212ac94-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.011878 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.011886 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.011895 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.030462 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6b263e8e-7618-4044-bed1-b35174d6a8f4","Type":"ContainerStarted","Data":"998f26954c016e9a4be4fed72f68f879a1f7793c171311545d8f4958871325fd"} Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.038496 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5755369-fc75-443e-b608-996b7212ac94","Type":"ContainerDied","Data":"ea5ec3936bd62e44ba566c0aec793fe2ef89fc6023d7cad6b0242228b7b8d07a"} Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.038539 4931 scope.go:117] "RemoveContainer" containerID="9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.038664 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.042243 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2","Type":"ContainerDied","Data":"54dbd562a66dec0d4b1d17dc98e849f8bad3b54d165bc61c92a13695e75f4ae7"} Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.042297 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.049010 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-config-data" (OuterVolumeSpecName: "config-data") pod "e5755369-fc75-443e-b608-996b7212ac94" (UID: "e5755369-fc75-443e-b608-996b7212ac94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.054250 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.552493659 podStartE2EDuration="12.054226109s" podCreationTimestamp="2026-01-30 05:26:55 +0000 UTC" firstStartedPulling="2026-01-30 05:26:56.111731983 +0000 UTC m=+1151.481642240" lastFinishedPulling="2026-01-30 05:27:06.613464433 +0000 UTC m=+1161.983374690" observedRunningTime="2026-01-30 05:27:07.049296715 +0000 UTC m=+1162.419206972" watchObservedRunningTime="2026-01-30 05:27:07.054226109 +0000 UTC m=+1162.424136366" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.066355 4931 scope.go:117] "RemoveContainer" containerID="b2d093b072c9cdec044442a6734371a3413de8a6cea48d10abb06780f1cd4e75" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.093090 4931 scope.go:117] "RemoveContainer" containerID="f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.114173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqss9\" (UniqueName: \"kubernetes.io/projected/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2-kube-api-access-tqss9\") pod \"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2\" (UID: \"75e7b62f-8246-48b8-bcbb-d7c5129dd5e2\") " Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.115339 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5755369-fc75-443e-b608-996b7212ac94-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.115696 4931 scope.go:117] "RemoveContainer" containerID="08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.118794 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2-kube-api-access-tqss9" (OuterVolumeSpecName: "kube-api-access-tqss9") pod "75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" (UID: "75e7b62f-8246-48b8-bcbb-d7c5129dd5e2"). InnerVolumeSpecName "kube-api-access-tqss9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.135690 4931 scope.go:117] "RemoveContainer" containerID="b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.216647 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqss9\" (UniqueName: \"kubernetes.io/projected/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2-kube-api-access-tqss9\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.376006 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.385907 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.396769 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.404400 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.414863 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415235 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="sg-core" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415251 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="sg-core" Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415265 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-central-agent" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415271 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-central-agent" Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415280 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="proxy-httpd" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415287 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="proxy-httpd" Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415293 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-notification-agent" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415301 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-notification-agent" Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415316 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-api" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415321 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-api" Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415333 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" containerName="kube-state-metrics" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415339 4931 
state_mem.go:107] "Deleted CPUSet assignment" podUID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" containerName="kube-state-metrics" Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.415358 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-httpd" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415363 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-httpd" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415588 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="proxy-httpd" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415602 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-central-agent" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415612 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" containerName="kube-state-metrics" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415625 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="ceilometer-notification-agent" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415634 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5755369-fc75-443e-b608-996b7212ac94" containerName="sg-core" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415644 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-httpd" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.415654 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c600b2f-8fcd-402b-bd79-9d64f8d1f1ec" containerName="neutron-api" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.417135 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.419286 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-b6t4t" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.419531 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.419885 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.423085 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.432302 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e7b62f-8246-48b8-bcbb-d7c5129dd5e2" path="/var/lib/kubelet/pods/75e7b62f-8246-48b8-bcbb-d7c5129dd5e2/volumes" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.434445 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5755369-fc75-443e-b608-996b7212ac94" path="/var/lib/kubelet/pods/e5755369-fc75-443e-b608-996b7212ac94/volumes" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.435818 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.437591 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.438147 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.443814 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.443827 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.448120 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521667 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-log-httpd\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521732 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-scripts\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521768 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521797 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521828 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-config-data\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521854 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521918 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-run-httpd\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521943 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwrn\" (UniqueName: \"kubernetes.io/projected/30f9b591-fea6-4010-99db-45eef2237cdc-kube-api-access-znwrn\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.521979 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.522034 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.522076 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwgb\" (UniqueName: \"kubernetes.io/projected/e42548a3-5a7b-4f5b-8b13-8b5746710618-kube-api-access-knwgb\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.623856 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwgb\" (UniqueName: 
\"kubernetes.io/projected/e42548a3-5a7b-4f5b-8b13-8b5746710618-kube-api-access-knwgb\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.623978 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-log-httpd\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624039 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-scripts\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624082 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624123 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624163 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-config-data\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624202 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624261 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624303 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-run-httpd\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624346 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znwrn\" (UniqueName: \"kubernetes.io/projected/30f9b591-fea6-4010-99db-45eef2237cdc-kube-api-access-znwrn\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624403 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.624523 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.625554 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-log-httpd\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.625648 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-run-httpd\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.631941 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:07 crc kubenswrapper[4931]: E0130 05:27:07.632708 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-knwgb scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="e42548a3-5a7b-4f5b-8b13-8b5746710618" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.634235 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.635589 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.636498 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.636619 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.640443 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.640648 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-scripts\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.641876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.648267 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwgb\" (UniqueName: \"kubernetes.io/projected/e42548a3-5a7b-4f5b-8b13-8b5746710618-kube-api-access-knwgb\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.648627 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-config-data\") pod \"ceilometer-0\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " pod="openstack/ceilometer-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.656239 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znwrn\" (UniqueName: \"kubernetes.io/projected/30f9b591-fea6-4010-99db-45eef2237cdc-kube-api-access-znwrn\") pod \"kube-state-metrics-0\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") " pod="openstack/kube-state-metrics-0" Jan 30 05:27:07 crc kubenswrapper[4931]: I0130 05:27:07.754428 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.057780 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.070195 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133198 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-scripts\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133331 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-sg-core-conf-yaml\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133404 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-config-data\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133500 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-combined-ca-bundle\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133546 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-log-httpd\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133631 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-ceilometer-tls-certs\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133702 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwgb\" (UniqueName: \"kubernetes.io/projected/e42548a3-5a7b-4f5b-8b13-8b5746710618-kube-api-access-knwgb\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.133744 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-run-httpd\") pod \"e42548a3-5a7b-4f5b-8b13-8b5746710618\" (UID: \"e42548a3-5a7b-4f5b-8b13-8b5746710618\") " Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.134377 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.136544 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.139726 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-config-data" (OuterVolumeSpecName: "config-data") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.141264 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.142323 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.142804 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42548a3-5a7b-4f5b-8b13-8b5746710618-kube-api-access-knwgb" (OuterVolumeSpecName: "kube-api-access-knwgb") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "kube-api-access-knwgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.144933 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-scripts" (OuterVolumeSpecName: "scripts") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.146555 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e42548a3-5a7b-4f5b-8b13-8b5746710618" (UID: "e42548a3-5a7b-4f5b-8b13-8b5746710618"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236352 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236377 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236387 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236395 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236404 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236413 4931 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42548a3-5a7b-4f5b-8b13-8b5746710618-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236432 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwgb\" (UniqueName: \"kubernetes.io/projected/e42548a3-5a7b-4f5b-8b13-8b5746710618-kube-api-access-knwgb\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.236439 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e42548a3-5a7b-4f5b-8b13-8b5746710618-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.260459 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.287647 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vbzqc"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.288630 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.301106 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vbzqc"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.374145 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xvdtt"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.375095 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.390666 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0120-account-create-update-dptmf"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.398980 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.401960 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.408675 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xvdtt"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.424568 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0120-account-create-update-dptmf"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.440375 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cbb37a-882a-46cf-9cee-0543ac708004-operator-scripts\") pod \"nova-cell0-db-create-xvdtt\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.440597 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5hz\" (UniqueName: \"kubernetes.io/projected/d5cbb37a-882a-46cf-9cee-0543ac708004-kube-api-access-kt5hz\") pod \"nova-cell0-db-create-xvdtt\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.440667 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-operator-scripts\") pod \"nova-api-db-create-vbzqc\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.440734 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfnc\" (UniqueName: \"kubernetes.io/projected/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-kube-api-access-jlfnc\") pod \"nova-api-db-create-vbzqc\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.485196 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-x4mqp"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.486457 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.491342 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x4mqp"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.541867 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cbb37a-882a-46cf-9cee-0543ac708004-operator-scripts\") pod \"nova-cell0-db-create-xvdtt\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542155 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8624816-8c2c-4d9c-b3a5-426253850926-operator-scripts\") pod \"nova-cell1-db-create-x4mqp\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542188 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5hz\" (UniqueName: \"kubernetes.io/projected/d5cbb37a-882a-46cf-9cee-0543ac708004-kube-api-access-kt5hz\") pod \"nova-cell0-db-create-xvdtt\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542216 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-operator-scripts\") pod \"nova-api-db-create-vbzqc\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542246 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2bbp\" (UniqueName: \"kubernetes.io/projected/e8624816-8c2c-4d9c-b3a5-426253850926-kube-api-access-b2bbp\") pod \"nova-cell1-db-create-x4mqp\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542285 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlfnc\" (UniqueName: \"kubernetes.io/projected/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-kube-api-access-jlfnc\") pod \"nova-api-db-create-vbzqc\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdss\" (UniqueName: \"kubernetes.io/projected/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-kube-api-access-mkdss\") pod \"nova-api-0120-account-create-update-dptmf\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542368 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-operator-scripts\") pod \"nova-api-0120-account-create-update-dptmf\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: 
I0130 05:27:08.542624 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cbb37a-882a-46cf-9cee-0543ac708004-operator-scripts\") pod \"nova-cell0-db-create-xvdtt\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.542865 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-operator-scripts\") pod \"nova-api-db-create-vbzqc\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.566165 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5hz\" (UniqueName: \"kubernetes.io/projected/d5cbb37a-882a-46cf-9cee-0543ac708004-kube-api-access-kt5hz\") pod \"nova-cell0-db-create-xvdtt\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.570106 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfnc\" (UniqueName: \"kubernetes.io/projected/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-kube-api-access-jlfnc\") pod \"nova-api-db-create-vbzqc\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.592201 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-vfdzl"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.593571 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.595287 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.613374 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.614928 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-vfdzl"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.619654 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.622971 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.649145 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8624816-8c2c-4d9c-b3a5-426253850926-operator-scripts\") pod \"nova-cell1-db-create-x4mqp\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.649362 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2bbp\" (UniqueName: \"kubernetes.io/projected/e8624816-8c2c-4d9c-b3a5-426253850926-kube-api-access-b2bbp\") pod \"nova-cell1-db-create-x4mqp\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.649549 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053ccacf-d473-49f5-89e5-545a753e5e03-operator-scripts\") pod \"nova-cell0-10f6-account-create-update-vfdzl\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.649654 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdss\" (UniqueName: \"kubernetes.io/projected/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-kube-api-access-mkdss\") pod \"nova-api-0120-account-create-update-dptmf\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.649787 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-operator-scripts\") pod \"nova-api-0120-account-create-update-dptmf\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.650042 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89msj\" (UniqueName: \"kubernetes.io/projected/053ccacf-d473-49f5-89e5-545a753e5e03-kube-api-access-89msj\") pod \"nova-cell0-10f6-account-create-update-vfdzl\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.651645 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8624816-8c2c-4d9c-b3a5-426253850926-operator-scripts\") pod \"nova-cell1-db-create-x4mqp\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc 
kubenswrapper[4931]: I0130 05:27:08.654565 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-operator-scripts\") pod \"nova-api-0120-account-create-update-dptmf\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.685685 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2bbp\" (UniqueName: \"kubernetes.io/projected/e8624816-8c2c-4d9c-b3a5-426253850926-kube-api-access-b2bbp\") pod \"nova-cell1-db-create-x4mqp\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.687255 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdss\" (UniqueName: \"kubernetes.io/projected/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-kube-api-access-mkdss\") pod \"nova-api-0120-account-create-update-dptmf\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.704442 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.721065 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.751355 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053ccacf-d473-49f5-89e5-545a753e5e03-operator-scripts\") pod \"nova-cell0-10f6-account-create-update-vfdzl\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.751477 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89msj\" (UniqueName: \"kubernetes.io/projected/053ccacf-d473-49f5-89e5-545a753e5e03-kube-api-access-89msj\") pod \"nova-cell0-10f6-account-create-update-vfdzl\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.752789 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053ccacf-d473-49f5-89e5-545a753e5e03-operator-scripts\") pod \"nova-cell0-10f6-account-create-update-vfdzl\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.806720 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89msj\" (UniqueName: \"kubernetes.io/projected/053ccacf-d473-49f5-89e5-545a753e5e03-kube-api-access-89msj\") pod \"nova-cell0-10f6-account-create-update-vfdzl\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.814230 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-326d-account-create-update-rvcsw"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.815578 4931 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.818083 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.821586 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-rvcsw"] Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.850286 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.861197 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff3c7ac-e403-4826-bf45-a6bed05570b7-operator-scripts\") pod \"nova-cell1-326d-account-create-update-rvcsw\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.861379 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.861391 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz255\" (UniqueName: \"kubernetes.io/projected/1ff3c7ac-e403-4826-bf45-a6bed05570b7-kube-api-access-xz255\") pod \"nova-cell1-326d-account-create-update-rvcsw\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.962819 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff3c7ac-e403-4826-bf45-a6bed05570b7-operator-scripts\") pod \"nova-cell1-326d-account-create-update-rvcsw\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.962935 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz255\" (UniqueName: \"kubernetes.io/projected/1ff3c7ac-e403-4826-bf45-a6bed05570b7-kube-api-access-xz255\") pod \"nova-cell1-326d-account-create-update-rvcsw\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.964771 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff3c7ac-e403-4826-bf45-a6bed05570b7-operator-scripts\") pod \"nova-cell1-326d-account-create-update-rvcsw\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:08 crc kubenswrapper[4931]: I0130 05:27:08.980960 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz255\" (UniqueName: \"kubernetes.io/projected/1ff3c7ac-e403-4826-bf45-a6bed05570b7-kube-api-access-xz255\") pod \"nova-cell1-326d-account-create-update-rvcsw\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:09 crc kubenswrapper[4931]: E0130 05:27:09.003843 4931 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92991ff_5b79_452a_b5ac_9dc90ab42f68.slice/crio-conmon-fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c600b2f_8fcd_402b_bd79_9d64f8d1f1ec.slice/crio-49b94c209fcd846b366cb60120c52ee63d74a76288f62e76634d76df2ff577f1\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c600b2f_8fcd_402b_bd79_9d64f8d1f1ec.slice/crio-e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92991ff_5b79_452a_b5ac_9dc90ab42f68.slice/crio-conmon-133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84203bc9_afb4_42cb_843d_c211490ce275.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-conmon-08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c600b2f_8fcd_402b_bd79_9d64f8d1f1ec.slice/crio-conmon-e15def09885e74ede7250dabc482871e820b4af404030d975622af536ceb70c2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c600b2f_8fcd_402b_bd79_9d64f8d1f1ec.slice/crio-f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-conmon-9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97f44787_3f37_44f1_85a5_4acffef71d95.slice/crio-4dc106a9347c18b30457fcfe0ba0955c89ca0037e4655a069609d973aaa2c8d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c600b2f_8fcd_402b_bd79_9d64f8d1f1ec.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92991ff_5b79_452a_b5ac_9dc90ab42f68.slice/crio-fd03896a043ef7af736cea0542c13e92c9e0f2ad27eb023f4ee70e94e5161f82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-08e9531045b4ed348a8e15f9e06ca988f965f1f605cc60da77be0b95272ee581.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84203bc9_afb4_42cb_843d_c211490ce275.slice/crio-ae04b2bb5e94306ea64985be9c8f6deb0c75c6536be8008e597d586d1aee985e\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6bc53b_31f7_4650_aab3_d4bcf8b685ab.slice/crio-conmon-6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189be3dc_d439_47c2_b1f2_7413fc4b5e85.slice/crio-conmon-a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod807d8709_a403_4186_83f5_ec76aee793fe.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c600b2f_8fcd_402b_bd79_9d64f8d1f1ec.slice/crio-conmon-f166ba6e75d463bf521caced327fdcd398e366003e50939372ecbc0798e6412a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf6bc53b_31f7_4650_aab3_d4bcf8b685ab.slice/crio-6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e7b62f_8246_48b8_bcbb_d7c5129dd5e2.slice/crio-54dbd562a66dec0d4b1d17dc98e849f8bad3b54d165bc61c92a13695e75f4ae7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod807d8709_a403_4186_83f5_ec76aee793fe.slice/crio-conmon-4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189be3dc_d439_47c2_b1f2_7413fc4b5e85.slice/crio-a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod807d8709_a403_4186_83f5_ec76aee793fe.slice/crio-fc0c653d3e574db62881709b302919c961837f9a8fc28421f26c150c1cbda477\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-conmon-f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84203bc9_afb4_42cb_843d_c211490ce275.slice/crio-56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97f44787_3f37_44f1_85a5_4acffef71d95.slice/crio-c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e7b62f_8246_48b8_bcbb_d7c5129dd5e2.slice/crio-b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84203bc9_afb4_42cb_843d_c211490ce275.slice/crio-conmon-56d86156b326eeb21177e44e6d091888d2774d4f2e2a4d09d4994d48a8cc370e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92991ff_5b79_452a_b5ac_9dc90ab42f68.slice/crio-040b81795acd0bef7c76b7a99d650deaac66b5fa82f97baf669121be56928797\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-9e8bac5316b66891bb65008c7a82aba4d5b92fb6001c63ba8c49a06e95040b7f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92991ff_5b79_452a_b5ac_9dc90ab42f68.slice/crio-133ba33140e89ee163bac35cf527e591a8f3646d1451e448d0c71bf3a1966b21.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod807d8709_a403_4186_83f5_ec76aee793fe.slice/crio-4784748276da4d9a133003266f7562cd472e3126956926bf5055ab897b3f9fd0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75e7b62f_8246_48b8_bcbb_d7c5129dd5e2.slice/crio-conmon-b4e72393e02c3e0619fa1e4bea6d0742ffbef4de8775e7d96ccdc8545af19acb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92991ff_5b79_452a_b5ac_9dc90ab42f68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5755369_fc75_443e_b608_996b7212ac94.slice/crio-f4f164366b57c885cd381de9f7095a47cecfe7af0c8f7b404360ca7dbbfa150d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97f44787_3f37_44f1_85a5_4acffef71d95.slice/crio-conmon-c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57.scope\": RecentStats: unable to find data in memory cache]" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.080578 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30f9b591-fea6-4010-99db-45eef2237cdc","Type":"ContainerStarted","Data":"3ab5021fa2dee4a0cbf054b6b79552974b77b39e6c35cbc24e07bc801848b48b"} Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.086262 4931 generic.go:334] "Generic (PLEG): container finished" podID="97f44787-3f37-44f1-85a5-4acffef71d95" containerID="4dc106a9347c18b30457fcfe0ba0955c89ca0037e4655a069609d973aaa2c8d2" exitCode=0 Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.086316 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97f44787-3f37-44f1-85a5-4acffef71d95","Type":"ContainerDied","Data":"4dc106a9347c18b30457fcfe0ba0955c89ca0037e4655a069609d973aaa2c8d2"} Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.087540 4931 generic.go:334] "Generic (PLEG): container finished" podID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerID="6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3" exitCode=137 Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.087684 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af6bc53b-31f7-4650-aab3-d4bcf8b685ab","Type":"ContainerDied","Data":"6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3"} Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.087841 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.135235 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.159472 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.185318 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.188547 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.190640 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.193205 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.193508 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.193625 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.219855 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.273808 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-scripts\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.273857 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.273893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.273917 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.273952 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-config-data\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.273967 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-log-httpd\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.274023 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8ww7\" (UniqueName: \"kubernetes.io/projected/177d0201-cde1-4aa2-8bcd-63ebade72464-kube-api-access-q8ww7\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.274052 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-run-httpd\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387408 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-scripts\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387474 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387511 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387537 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387577 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-config-data\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387592 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-log-httpd\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387646 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8ww7\" (UniqueName: \"kubernetes.io/projected/177d0201-cde1-4aa2-8bcd-63ebade72464-kube-api-access-q8ww7\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.387676 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-run-httpd\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.388141 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-run-httpd\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.392023 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-log-httpd\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.412072 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.416594 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.421051 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.424489 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-scripts\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.426512 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-config-data\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.444195 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.488803 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data-custom\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.488868 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 
05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.488979 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m9bv\" (UniqueName: \"kubernetes.io/projected/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-kube-api-access-5m9bv\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.489052 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-combined-ca-bundle\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.489096 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-logs\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.489126 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-scripts\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.489210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-etc-machine-id\") pod \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\" (UID: \"af6bc53b-31f7-4650-aab3-d4bcf8b685ab\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.496043 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42548a3-5a7b-4f5b-8b13-8b5746710618" path="/var/lib/kubelet/pods/e42548a3-5a7b-4f5b-8b13-8b5746710618/volumes" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.508132 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.508628 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-logs" (OuterVolumeSpecName: "logs") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.516652 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.516886 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-scripts" (OuterVolumeSpecName: "scripts") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.534454 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-kube-api-access-5m9bv" (OuterVolumeSpecName: "kube-api-access-5m9bv") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "kube-api-access-5m9bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.564547 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8ww7\" (UniqueName: \"kubernetes.io/projected/177d0201-cde1-4aa2-8bcd-63ebade72464-kube-api-access-q8ww7\") pod \"ceilometer-0\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.572416 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.593564 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.593599 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m9bv\" (UniqueName: \"kubernetes.io/projected/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-kube-api-access-5m9bv\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.593611 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.593622 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.593731 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.593741 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.652374 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data" 
(OuterVolumeSpecName: "config-data") pod "af6bc53b-31f7-4650-aab3-d4bcf8b685ab" (UID: "af6bc53b-31f7-4650-aab3-d4bcf8b685ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.695332 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af6bc53b-31f7-4650-aab3-d4bcf8b685ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.723509 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vbzqc"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.754859 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797174 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn59m\" (UniqueName: \"kubernetes.io/projected/97f44787-3f37-44f1-85a5-4acffef71d95-kube-api-access-vn59m\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797477 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797542 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-combined-ca-bundle\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797638 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-scripts\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797691 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-httpd-run\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797715 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-internal-tls-certs\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797745 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-config-data\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: \"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.797797 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-logs\") pod \"97f44787-3f37-44f1-85a5-4acffef71d95\" (UID: 
\"97f44787-3f37-44f1-85a5-4acffef71d95\") " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.798026 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.798480 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.798489 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-logs" (OuterVolumeSpecName: "logs") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.805155 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.806508 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-scripts" (OuterVolumeSpecName: "scripts") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.811218 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f44787-3f37-44f1-85a5-4acffef71d95-kube-api-access-vn59m" (OuterVolumeSpecName: "kube-api-access-vn59m") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "kube-api-access-vn59m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.817951 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.900998 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f44787-3f37-44f1-85a5-4acffef71d95-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.901026 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn59m\" (UniqueName: \"kubernetes.io/projected/97f44787-3f37-44f1-85a5-4acffef71d95-kube-api-access-vn59m\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.901069 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.901082 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.910271 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-vfdzl"] Jan 30 05:27:09 crc kubenswrapper[4931]: W0130 05:27:09.925927 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod053ccacf_d473_49f5_89e5_545a753e5e03.slice/crio-b9fcfefa3cc95bbe91be11ad77ac98a7d6d76da884fe714257e2e61bdeb26830 WatchSource:0}: Error finding container b9fcfefa3cc95bbe91be11ad77ac98a7d6d76da884fe714257e2e61bdeb26830: Status 404 returned error can't find the container with id b9fcfefa3cc95bbe91be11ad77ac98a7d6d76da884fe714257e2e61bdeb26830 Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.930671 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0120-account-create-update-dptmf"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.933117 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.947994 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xvdtt"] Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.952745 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-config-data" (OuterVolumeSpecName: "config-data") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.960665 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.962636 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97f44787-3f37-44f1-85a5-4acffef71d95" (UID: "97f44787-3f37-44f1-85a5-4acffef71d95"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:09 crc kubenswrapper[4931]: I0130 05:27:09.978862 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x4mqp"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.008971 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.009015 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.009025 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.009036 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f44787-3f37-44f1-85a5-4acffef71d95-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.049991 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-rvcsw"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.105255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" event={"ID":"053ccacf-d473-49f5-89e5-545a753e5e03","Type":"ContainerStarted","Data":"b9fcfefa3cc95bbe91be11ad77ac98a7d6d76da884fe714257e2e61bdeb26830"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.114258 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vbzqc" event={"ID":"6bdb7d70-31a9-4d52-aae0-072e8c62a23f","Type":"ContainerStarted","Data":"ed1045d9c4b634bdebcb19b30994b9f7ac39021883a8b98a833d09018502f440"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.114300 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vbzqc" event={"ID":"6bdb7d70-31a9-4d52-aae0-072e8c62a23f","Type":"ContainerStarted","Data":"39e1e0fee7385124dd916707bda2070cfe0dfef223110231a2b4c91e3fb436e2"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.123952 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97f44787-3f37-44f1-85a5-4acffef71d95","Type":"ContainerDied","Data":"7bfff4eea4487971b7e050b186c84e3209413100130292fb4b6aba07f7e36bce"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.123998 4931 scope.go:117] "RemoveContainer" 
containerID="4dc106a9347c18b30457fcfe0ba0955c89ca0037e4655a069609d973aaa2c8d2" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.124118 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.137979 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xvdtt" event={"ID":"d5cbb37a-882a-46cf-9cee-0543ac708004","Type":"ContainerStarted","Data":"a99db27aa7441abc06efd7c51c328c92f8d197643334e3cadd83bbe4996d30bd"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.143971 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vbzqc" podStartSLOduration=2.143951128 podStartE2EDuration="2.143951128s" podCreationTimestamp="2026-01-30 05:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:27:10.134055691 +0000 UTC m=+1165.503965948" watchObservedRunningTime="2026-01-30 05:27:10.143951128 +0000 UTC m=+1165.513861385" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.145206 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0120-account-create-update-dptmf" event={"ID":"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1","Type":"ContainerStarted","Data":"2faa6341fc48e403680c1a43938fac76f4ec0ce1bf1abf1d909499ba638bb1a3"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.148447 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x4mqp" event={"ID":"e8624816-8c2c-4d9c-b3a5-426253850926","Type":"ContainerStarted","Data":"6bf02e4c1dbd5fde8736e51c53a7aee2fa38184d7d077a5959c84e1d4d9b84d2"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.153328 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.153430 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af6bc53b-31f7-4650-aab3-d4bcf8b685ab","Type":"ContainerDied","Data":"9ea7e55f3db83940154f2b0bcb0d4ef000b7a1cdefa0062385e86b6b76cab2c2"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.155515 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30f9b591-fea6-4010-99db-45eef2237cdc","Type":"ContainerStarted","Data":"100081f00d3d095ca7d8dca6b7343ac8590f3de539067c314527dbcd86ceca1c"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.155722 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.160098 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" event={"ID":"1ff3c7ac-e403-4826-bf45-a6bed05570b7","Type":"ContainerStarted","Data":"4719e62df3f7e4e05282f94f65b6783badfcadb12c381754281d462174b59154"} Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.177880 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.791152885 podStartE2EDuration="3.177844587s" podCreationTimestamp="2026-01-30 05:27:07 +0000 UTC" firstStartedPulling="2026-01-30 05:27:08.265186768 +0000 UTC m=+1163.635097045" lastFinishedPulling="2026-01-30 05:27:08.65187849 +0000 UTC m=+1164.021788747" observedRunningTime="2026-01-30 05:27:10.170041485 +0000 UTC m=+1165.539951742" watchObservedRunningTime="2026-01-30 05:27:10.177844587 +0000 UTC m=+1165.547754854" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.186352 4931 scope.go:117] "RemoveContainer" containerID="c3997c6bebfa178d2d159b7c46082a54c7bc989ec2ee2d507189f6cfa3f09d57" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.239499 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.243695 4931 scope.go:117] "RemoveContainer" containerID="6b99333c2447cf347ded741a740d78480c65d8cae2c155c1e36dafd5c5578db3" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.249954 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288248 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: E0130 05:27:10.288692 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-log" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288708 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-log" Jan 30 05:27:10 crc kubenswrapper[4931]: E0130 05:27:10.288718 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-httpd" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288726 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-httpd" Jan 30 05:27:10 crc kubenswrapper[4931]: E0130 05:27:10.288753 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api-log" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288760 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api-log" Jan 30 05:27:10 crc kubenswrapper[4931]: E0130 05:27:10.288779 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288786 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288971 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api-log" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288982 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-log" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.288993 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" containerName="glance-httpd" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.289017 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" containerName="cinder-api" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.289925 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.293184 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.293354 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.312696 4931 scope.go:117] "RemoveContainer" containerID="2a8b28eadcd454ca8adf3b36ea9153ce1d0f727ce7e3b65bd14a0471ebbea32f" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.323721 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.372311 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.382055 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.410276 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.411961 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.416407 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.418140 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.418305 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.418901 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.418937 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhps\" (UniqueName: \"kubernetes.io/projected/3415cfc4-a71a-4110-bf82-295181bb386f-kube-api-access-nnhps\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419014 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419035 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419106 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419125 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419159 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419211 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.419842 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.444489 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.521451 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-logs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.521844 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.521877 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.521900 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.521989 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zlj\" (UniqueName: \"kubernetes.io/projected/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-kube-api-access-64zlj\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522007 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522029 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnhps\" (UniqueName: \"kubernetes.io/projected/3415cfc4-a71a-4110-bf82-295181bb386f-kube-api-access-nnhps\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522057 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522078 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522102 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522147 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522164 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522190 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522209 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522230 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522249 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522268 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-scripts\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522380 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.522615 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.523852 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.528004 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.528311 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.528673 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.532823 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.543993 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnhps\" (UniqueName: \"kubernetes.io/projected/3415cfc4-a71a-4110-bf82-295181bb386f-kube-api-access-nnhps\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.572786 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.613665 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.624391 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-scripts\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.624458 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-logs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.624500 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.624525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.624557 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zlj\" (UniqueName: \"kubernetes.io/projected/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-kube-api-access-64zlj\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.624983 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-logs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.625122 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.625201 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.625220 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.625244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.626077 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.627723 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-scripts\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.629991 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data-custom\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.630992 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.631017 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.633532 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.633583 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.642087 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zlj\" (UniqueName: \"kubernetes.io/projected/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-kube-api-access-64zlj\") pod \"cinder-api-0\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.745161 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:27:10 crc kubenswrapper[4931]: I0130 05:27:10.971334 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.170410 4931 generic.go:334] "Generic (PLEG): container finished" podID="1ff3c7ac-e403-4826-bf45-a6bed05570b7" containerID="daeb4e60a2f2e8b0ecc5573dd48689c8e466dc66250fe49e905723d105d79613" exitCode=0 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.170556 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" event={"ID":"1ff3c7ac-e403-4826-bf45-a6bed05570b7","Type":"ContainerDied","Data":"daeb4e60a2f2e8b0ecc5573dd48689c8e466dc66250fe49e905723d105d79613"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.172804 4931 generic.go:334] "Generic (PLEG): container finished" podID="7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" containerID="6c4ebb40e4402e95e337ac0e8eea0a4fb903b22dbcfc5ac614853d0c17f24e3a" exitCode=0 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.172858 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0120-account-create-update-dptmf" event={"ID":"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1","Type":"ContainerDied","Data":"6c4ebb40e4402e95e337ac0e8eea0a4fb903b22dbcfc5ac614853d0c17f24e3a"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.174157 4931 generic.go:334] "Generic (PLEG): container finished" podID="6bdb7d70-31a9-4d52-aae0-072e8c62a23f" containerID="ed1045d9c4b634bdebcb19b30994b9f7ac39021883a8b98a833d09018502f440" exitCode=0 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.174198 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vbzqc" event={"ID":"6bdb7d70-31a9-4d52-aae0-072e8c62a23f","Type":"ContainerDied","Data":"ed1045d9c4b634bdebcb19b30994b9f7ac39021883a8b98a833d09018502f440"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.176389 4931 generic.go:334] "Generic (PLEG): container finished" podID="d5cbb37a-882a-46cf-9cee-0543ac708004" containerID="edf9b3d1d8428caf5db14c3063b00d649e4d886f974003048a406d3bcf0b7c43" exitCode=0 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.176439 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xvdtt" event={"ID":"d5cbb37a-882a-46cf-9cee-0543ac708004","Type":"ContainerDied","Data":"edf9b3d1d8428caf5db14c3063b00d649e4d886f974003048a406d3bcf0b7c43"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.184291 4931 generic.go:334] "Generic (PLEG): container finished" podID="e8624816-8c2c-4d9c-b3a5-426253850926" containerID="5712d27fd9c195ed4c35f4530c38c5e87c6a63708aedb0fa792d34d9e26a0b9a" exitCode=0 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.184350 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x4mqp" event={"ID":"e8624816-8c2c-4d9c-b3a5-426253850926","Type":"ContainerDied","Data":"5712d27fd9c195ed4c35f4530c38c5e87c6a63708aedb0fa792d34d9e26a0b9a"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.191193 4931 generic.go:334] "Generic (PLEG): container finished" podID="053ccacf-d473-49f5-89e5-545a753e5e03" containerID="976d06480a8d07dd149684c2767dbf90e61f0fd7efbc4d623ba32e7d83fb861e" exitCode=0 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.191244 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" 
event={"ID":"053ccacf-d473-49f5-89e5-545a753e5e03","Type":"ContainerDied","Data":"976d06480a8d07dd149684c2767dbf90e61f0fd7efbc4d623ba32e7d83fb861e"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.192544 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.197039 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerStarted","Data":"859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.197065 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerStarted","Data":"dff92e478717103c570a46fcba5be2f0a5365832852a9809f2694fb9464d3ab5"} Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.217065 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:27:11 crc kubenswrapper[4931]: W0130 05:27:11.218840 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc86b96a8_cd5c_4ea7_8a6f_5b3a4b2d923e.slice/crio-d58c6ad814535983bdd3740a7cd3d8c344b8ddf68658a3e7d51e045ec46e07d7 WatchSource:0}: Error finding container d58c6ad814535983bdd3740a7cd3d8c344b8ddf68658a3e7d51e045ec46e07d7: Status 404 returned error can't find the container with id d58c6ad814535983bdd3740a7cd3d8c344b8ddf68658a3e7d51e045ec46e07d7 Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.434677 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f44787-3f37-44f1-85a5-4acffef71d95" path="/var/lib/kubelet/pods/97f44787-3f37-44f1-85a5-4acffef71d95/volumes" Jan 30 05:27:11 crc kubenswrapper[4931]: I0130 05:27:11.435403 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6bc53b-31f7-4650-aab3-d4bcf8b685ab" path="/var/lib/kubelet/pods/af6bc53b-31f7-4650-aab3-d4bcf8b685ab/volumes" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.235313 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerStarted","Data":"a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf"} Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.238965 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3415cfc4-a71a-4110-bf82-295181bb386f","Type":"ContainerStarted","Data":"3795752efe01b170e153d47107ec186f09220104cdec5c71d0a210a63580f858"} Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.239196 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3415cfc4-a71a-4110-bf82-295181bb386f","Type":"ContainerStarted","Data":"8686488d53f891915ba13840ec460659816d6140e0778cc81ec5034b3206cf0a"} Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.241488 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e","Type":"ContainerStarted","Data":"c948d726013eb4e8273ef998118172023cae6536ca99db77a7f1ebd4884def12"} Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.241545 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e","Type":"ContainerStarted","Data":"d58c6ad814535983bdd3740a7cd3d8c344b8ddf68658a3e7d51e045ec46e07d7"} Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.706695 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.778091 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff3c7ac-e403-4826-bf45-a6bed05570b7-operator-scripts\") pod \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.778199 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz255\" (UniqueName: \"kubernetes.io/projected/1ff3c7ac-e403-4826-bf45-a6bed05570b7-kube-api-access-xz255\") pod \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\" (UID: \"1ff3c7ac-e403-4826-bf45-a6bed05570b7\") " Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.778629 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff3c7ac-e403-4826-bf45-a6bed05570b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ff3c7ac-e403-4826-bf45-a6bed05570b7" (UID: "1ff3c7ac-e403-4826-bf45-a6bed05570b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.792064 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff3c7ac-e403-4826-bf45-a6bed05570b7-kube-api-access-xz255" (OuterVolumeSpecName: "kube-api-access-xz255") pod "1ff3c7ac-e403-4826-bf45-a6bed05570b7" (UID: "1ff3c7ac-e403-4826-bf45-a6bed05570b7"). InnerVolumeSpecName "kube-api-access-xz255". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.879752 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz255\" (UniqueName: \"kubernetes.io/projected/1ff3c7ac-e403-4826-bf45-a6bed05570b7-kube-api-access-xz255\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.879780 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff3c7ac-e403-4826-bf45-a6bed05570b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.911373 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.916459 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.925136 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.942386 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.957222 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.980356 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-operator-scripts\") pod \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.980841 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlfnc\" (UniqueName: \"kubernetes.io/projected/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-kube-api-access-jlfnc\") pod \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\" (UID: \"6bdb7d70-31a9-4d52-aae0-072e8c62a23f\") " Jan 30 05:27:12 crc kubenswrapper[4931]: I0130 05:27:12.983892 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bdb7d70-31a9-4d52-aae0-072e8c62a23f" (UID: "6bdb7d70-31a9-4d52-aae0-072e8c62a23f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.005533 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-kube-api-access-jlfnc" (OuterVolumeSpecName: "kube-api-access-jlfnc") pod "6bdb7d70-31a9-4d52-aae0-072e8c62a23f" (UID: "6bdb7d70-31a9-4d52-aae0-072e8c62a23f"). InnerVolumeSpecName "kube-api-access-jlfnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.084990 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053ccacf-d473-49f5-89e5-545a753e5e03-operator-scripts\") pod \"053ccacf-d473-49f5-89e5-545a753e5e03\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085058 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cbb37a-882a-46cf-9cee-0543ac708004-operator-scripts\") pod \"d5cbb37a-882a-46cf-9cee-0543ac708004\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085114 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2bbp\" (UniqueName: \"kubernetes.io/projected/e8624816-8c2c-4d9c-b3a5-426253850926-kube-api-access-b2bbp\") pod \"e8624816-8c2c-4d9c-b3a5-426253850926\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085134 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5hz\" (UniqueName: \"kubernetes.io/projected/d5cbb37a-882a-46cf-9cee-0543ac708004-kube-api-access-kt5hz\") pod \"d5cbb37a-882a-46cf-9cee-0543ac708004\" (UID: \"d5cbb37a-882a-46cf-9cee-0543ac708004\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085216 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdss\" (UniqueName: \"kubernetes.io/projected/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-kube-api-access-mkdss\") pod \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " Jan 30 05:27:13 crc 
kubenswrapper[4931]: I0130 05:27:13.085255 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8624816-8c2c-4d9c-b3a5-426253850926-operator-scripts\") pod \"e8624816-8c2c-4d9c-b3a5-426253850926\" (UID: \"e8624816-8c2c-4d9c-b3a5-426253850926\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085272 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89msj\" (UniqueName: \"kubernetes.io/projected/053ccacf-d473-49f5-89e5-545a753e5e03-kube-api-access-89msj\") pod \"053ccacf-d473-49f5-89e5-545a753e5e03\" (UID: \"053ccacf-d473-49f5-89e5-545a753e5e03\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-operator-scripts\") pod \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\" (UID: \"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1\") " Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085674 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlfnc\" (UniqueName: \"kubernetes.io/projected/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-kube-api-access-jlfnc\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.085692 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bdb7d70-31a9-4d52-aae0-072e8c62a23f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.087782 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053ccacf-d473-49f5-89e5-545a753e5e03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "053ccacf-d473-49f5-89e5-545a753e5e03" (UID: "053ccacf-d473-49f5-89e5-545a753e5e03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.088036 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cbb37a-882a-46cf-9cee-0543ac708004-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5cbb37a-882a-46cf-9cee-0543ac708004" (UID: "d5cbb37a-882a-46cf-9cee-0543ac708004"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.088645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" (UID: "7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.088959 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8624816-8c2c-4d9c-b3a5-426253850926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8624816-8c2c-4d9c-b3a5-426253850926" (UID: "e8624816-8c2c-4d9c-b3a5-426253850926"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.093621 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-kube-api-access-mkdss" (OuterVolumeSpecName: "kube-api-access-mkdss") pod "7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" (UID: "7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1"). InnerVolumeSpecName "kube-api-access-mkdss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.094784 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cbb37a-882a-46cf-9cee-0543ac708004-kube-api-access-kt5hz" (OuterVolumeSpecName: "kube-api-access-kt5hz") pod "d5cbb37a-882a-46cf-9cee-0543ac708004" (UID: "d5cbb37a-882a-46cf-9cee-0543ac708004"). InnerVolumeSpecName "kube-api-access-kt5hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.096360 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053ccacf-d473-49f5-89e5-545a753e5e03-kube-api-access-89msj" (OuterVolumeSpecName: "kube-api-access-89msj") pod "053ccacf-d473-49f5-89e5-545a753e5e03" (UID: "053ccacf-d473-49f5-89e5-545a753e5e03"). InnerVolumeSpecName "kube-api-access-89msj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.096511 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8624816-8c2c-4d9c-b3a5-426253850926-kube-api-access-b2bbp" (OuterVolumeSpecName: "kube-api-access-b2bbp") pod "e8624816-8c2c-4d9c-b3a5-426253850926" (UID: "e8624816-8c2c-4d9c-b3a5-426253850926"). InnerVolumeSpecName "kube-api-access-b2bbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187220 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89msj\" (UniqueName: \"kubernetes.io/projected/053ccacf-d473-49f5-89e5-545a753e5e03-kube-api-access-89msj\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187254 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187263 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/053ccacf-d473-49f5-89e5-545a753e5e03-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187272 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cbb37a-882a-46cf-9cee-0543ac708004-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187281 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2bbp\" (UniqueName: \"kubernetes.io/projected/e8624816-8c2c-4d9c-b3a5-426253850926-kube-api-access-b2bbp\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187290 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5hz\" (UniqueName: \"kubernetes.io/projected/d5cbb37a-882a-46cf-9cee-0543ac708004-kube-api-access-kt5hz\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187298 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdss\" (UniqueName: \"kubernetes.io/projected/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1-kube-api-access-mkdss\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.187306 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8624816-8c2c-4d9c-b3a5-426253850926-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.249842 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerStarted","Data":"db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.251599 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e","Type":"ContainerStarted","Data":"2823dcc09d156bc746ffbc3ab196c3d6e136f453bf377837d7cce10861a168f4"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.252676 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.253931 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0120-account-create-update-dptmf" event={"ID":"7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1","Type":"ContainerDied","Data":"2faa6341fc48e403680c1a43938fac76f4ec0ce1bf1abf1d909499ba638bb1a3"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.253953 4931 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2faa6341fc48e403680c1a43938fac76f4ec0ce1bf1abf1d909499ba638bb1a3" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.254003 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0120-account-create-update-dptmf" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.258578 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.258571 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10f6-account-create-update-vfdzl" event={"ID":"053ccacf-d473-49f5-89e5-545a753e5e03","Type":"ContainerDied","Data":"b9fcfefa3cc95bbe91be11ad77ac98a7d6d76da884fe714257e2e61bdeb26830"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.258705 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9fcfefa3cc95bbe91be11ad77ac98a7d6d76da884fe714257e2e61bdeb26830" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.260097 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vbzqc" event={"ID":"6bdb7d70-31a9-4d52-aae0-072e8c62a23f","Type":"ContainerDied","Data":"39e1e0fee7385124dd916707bda2070cfe0dfef223110231a2b4c91e3fb436e2"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.260132 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e1e0fee7385124dd916707bda2070cfe0dfef223110231a2b4c91e3fb436e2" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.260143 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vbzqc" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.269789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xvdtt" event={"ID":"d5cbb37a-882a-46cf-9cee-0543ac708004","Type":"ContainerDied","Data":"a99db27aa7441abc06efd7c51c328c92f8d197643334e3cadd83bbe4996d30bd"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.269837 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a99db27aa7441abc06efd7c51c328c92f8d197643334e3cadd83bbe4996d30bd" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.269945 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xvdtt" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.273707 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3415cfc4-a71a-4110-bf82-295181bb386f","Type":"ContainerStarted","Data":"cc448c5e4a9d4def969b75156b3cc39bbccbe47f49a05ef9d15592b4643a809f"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.276829 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.276812127 podStartE2EDuration="3.276812127s" podCreationTimestamp="2026-01-30 05:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:27:13.272878115 +0000 UTC m=+1168.642788372" watchObservedRunningTime="2026-01-30 05:27:13.276812127 +0000 UTC m=+1168.646722384" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.279241 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" event={"ID":"1ff3c7ac-e403-4826-bf45-a6bed05570b7","Type":"ContainerDied","Data":"4719e62df3f7e4e05282f94f65b6783badfcadb12c381754281d462174b59154"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.279284 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4719e62df3f7e4e05282f94f65b6783badfcadb12c381754281d462174b59154" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.279361 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-rvcsw" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.286025 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x4mqp" event={"ID":"e8624816-8c2c-4d9c-b3a5-426253850926","Type":"ContainerDied","Data":"6bf02e4c1dbd5fde8736e51c53a7aee2fa38184d7d077a5959c84e1d4d9b84d2"} Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.286063 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bf02e4c1dbd5fde8736e51c53a7aee2fa38184d7d077a5959c84e1d4d9b84d2" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.286127 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x4mqp" Jan 30 05:27:13 crc kubenswrapper[4931]: I0130 05:27:13.295195 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.295181114 podStartE2EDuration="3.295181114s" podCreationTimestamp="2026-01-30 05:27:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:27:13.293058289 +0000 UTC m=+1168.662968546" watchObservedRunningTime="2026-01-30 05:27:13.295181114 +0000 UTC m=+1168.665091371" Jan 30 05:27:14 crc kubenswrapper[4931]: I0130 05:27:14.887091 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:27:14 crc kubenswrapper[4931]: I0130 05:27:14.887862 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-log" containerID="cri-o://d2ded790b556dc13af017d23c970f8fe6d49472a2741355949522d19b2e1e389" gracePeriod=30 Jan 30 05:27:14 crc kubenswrapper[4931]: I0130 05:27:14.887945 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-httpd" containerID="cri-o://6f20ab78e04ca2466a780c5cc51a4b37e0f487abee57f4e067c29bab7787be5d" gracePeriod=30 Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.303306 4931 generic.go:334] "Generic (PLEG): container finished" podID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerID="d2ded790b556dc13af017d23c970f8fe6d49472a2741355949522d19b2e1e389" exitCode=143 Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.303382 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18f01f64-f6e4-42f3-80f8-27c86f82eeef","Type":"ContainerDied","Data":"d2ded790b556dc13af017d23c970f8fe6d49472a2741355949522d19b2e1e389"} Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.306010 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerStarted","Data":"f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658"} Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.306204 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-central-agent" containerID="cri-o://859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb" gracePeriod=30 Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.306889 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="proxy-httpd" containerID="cri-o://f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658" gracePeriod=30 Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.306967 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="sg-core" containerID="cri-o://db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124" gracePeriod=30 Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.307028 4931 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/ceilometer-0" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-notification-agent" containerID="cri-o://a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf" gracePeriod=30 Jan 30 05:27:15 crc kubenswrapper[4931]: I0130 05:27:15.336984 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.407954608 podStartE2EDuration="6.336958589s" podCreationTimestamp="2026-01-30 05:27:09 +0000 UTC" firstStartedPulling="2026-01-30 05:27:10.456035924 +0000 UTC m=+1165.825946191" lastFinishedPulling="2026-01-30 05:27:14.385039905 +0000 UTC m=+1169.754950172" observedRunningTime="2026-01-30 05:27:15.330127891 +0000 UTC m=+1170.700038148" watchObservedRunningTime="2026-01-30 05:27:15.336958589 +0000 UTC m=+1170.706868846" Jan 30 05:27:16 crc kubenswrapper[4931]: I0130 05:27:16.320681 4931 generic.go:334] "Generic (PLEG): container finished" podID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerID="f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658" exitCode=0 Jan 30 05:27:16 crc kubenswrapper[4931]: I0130 05:27:16.321066 4931 generic.go:334] "Generic (PLEG): container finished" podID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerID="db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124" exitCode=2 Jan 30 05:27:16 crc kubenswrapper[4931]: I0130 05:27:16.321086 4931 generic.go:334] "Generic (PLEG): container finished" podID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerID="a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf" exitCode=0 Jan 30 05:27:16 crc kubenswrapper[4931]: I0130 05:27:16.320751 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerDied","Data":"f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658"} Jan 30 05:27:16 crc kubenswrapper[4931]: I0130 05:27:16.321144 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerDied","Data":"db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124"} Jan 30 05:27:16 crc kubenswrapper[4931]: I0130 05:27:16.321173 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerDied","Data":"a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf"} Jan 30 05:27:17 crc kubenswrapper[4931]: I0130 05:27:17.766381 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.311022 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.345686 4931 generic.go:334] "Generic (PLEG): container finished" podID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerID="859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb" exitCode=0 Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.345765 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerDied","Data":"859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb"} Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.345836 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"177d0201-cde1-4aa2-8bcd-63ebade72464","Type":"ContainerDied","Data":"dff92e478717103c570a46fcba5be2f0a5365832852a9809f2694fb9464d3ab5"} Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.345862 4931 scope.go:117] "RemoveContainer" containerID="f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.346070 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.355033 4931 generic.go:334] "Generic (PLEG): container finished" podID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerID="6f20ab78e04ca2466a780c5cc51a4b37e0f487abee57f4e067c29bab7787be5d" exitCode=0 Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.355087 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18f01f64-f6e4-42f3-80f8-27c86f82eeef","Type":"ContainerDied","Data":"6f20ab78e04ca2466a780c5cc51a4b37e0f487abee57f4e067c29bab7787be5d"} Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.370984 4931 scope.go:117] "RemoveContainer" containerID="db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390230 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-scripts\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390279 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-run-httpd\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390302 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8ww7\" (UniqueName: \"kubernetes.io/projected/177d0201-cde1-4aa2-8bcd-63ebade72464-kube-api-access-q8ww7\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390321 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-config-data\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390389 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-combined-ca-bundle\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390542 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-ceilometer-tls-certs\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390580 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-sg-core-conf-yaml\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.390651 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-log-httpd\") pod \"177d0201-cde1-4aa2-8bcd-63ebade72464\" (UID: \"177d0201-cde1-4aa2-8bcd-63ebade72464\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.391503 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.392267 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.398407 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177d0201-cde1-4aa2-8bcd-63ebade72464-kube-api-access-q8ww7" (OuterVolumeSpecName: "kube-api-access-q8ww7") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "kube-api-access-q8ww7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.398966 4931 scope.go:117] "RemoveContainer" containerID="a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.399670 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-scripts" (OuterVolumeSpecName: "scripts") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.455213 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.469919 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.471463 4931 scope.go:117] "RemoveContainer" containerID="859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.488630 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492859 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492886 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492895 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/177d0201-cde1-4aa2-8bcd-63ebade72464-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492904 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8ww7\" (UniqueName: \"kubernetes.io/projected/177d0201-cde1-4aa2-8bcd-63ebade72464-kube-api-access-q8ww7\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492913 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492921 4931 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.492929 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.505617 4931 scope.go:117] "RemoveContainer" containerID="f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.506378 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658\": container with ID starting with 
f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658 not found: ID does not exist" containerID="f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.506442 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658"} err="failed to get container status \"f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658\": rpc error: code = NotFound desc = could not find container \"f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658\": container with ID starting with f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658 not found: ID does not exist" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.506472 4931 scope.go:117] "RemoveContainer" containerID="db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.507312 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124\": container with ID starting with db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124 not found: ID does not exist" containerID="db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.507383 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124"} err="failed to get container status \"db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124\": rpc error: code = NotFound desc = could not find container \"db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124\": container with ID starting with db0762eac4cfade11ae03279b73a2bf8be966b75fd0991edf7a0fdb9c62c8124 not found: ID does not exist" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.507444 4931 scope.go:117] "RemoveContainer" containerID="a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.507759 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf\": container with ID starting with a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf not found: ID does not exist" containerID="a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.507790 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf"} err="failed to get container status \"a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf\": rpc error: code = NotFound desc = could not find container \"a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf\": container with ID starting with a3df13b4a30c3cc8048c90cfd2f794f41bd5d1b0a6a65d81540d524792930eaf not found: ID does not exist" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.507807 4931 scope.go:117] "RemoveContainer" containerID="859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.508075 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb\": container with ID starting with 859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb not found: ID does not exist" containerID="859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.508120 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb"} err="failed to get container status \"859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb\": rpc error: code = NotFound desc = could not find container \"859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb\": container with ID starting with 859b0155d8a411e508b48bc6436155827f71dbfc993b9a213b9b2f60df1f9abb not found: ID does not exist" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.510733 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-config-data" (OuterVolumeSpecName: "config-data") pod "177d0201-cde1-4aa2-8bcd-63ebade72464" (UID: "177d0201-cde1-4aa2-8bcd-63ebade72464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.532771 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598201 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-scripts\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598272 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598314 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-public-tls-certs\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598385 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-logs\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598403 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-combined-ca-bundle\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598474 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-config-data\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: 
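
[Annotation] The RemoveContainer / ContainerStatus exchange above, where every lookup fails with NotFound, is benign: the containers were already deleted, and the retry path merely confirms it. Since CRI calls travel over gRPC, "already gone" surfaces as codes.NotFound; below is a sketch of the usual idiom for treating that as success during deletion (the alreadyGone helper is illustrative, not a kubelet function).

```go
// Sketch: gRPC deletion retries typically treat codes.NotFound as "already
// removed", which is why the E-level lines above are harmless.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound,
		`could not find container "f235608970e9f08a99e6669a08f94081a8924b432253a91470fc894cf56a6658"`)
	if err == nil || alreadyGone(err) {
		fmt.Println("treat as removed; stop retrying") // the benign case logged above
	}
}
```
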
\"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598506 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czq79\" (UniqueName: \"kubernetes.io/projected/18f01f64-f6e4-42f3-80f8-27c86f82eeef-kube-api-access-czq79\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598589 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-httpd-run\") pod \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\" (UID: \"18f01f64-f6e4-42f3-80f8-27c86f82eeef\") " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.598944 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/177d0201-cde1-4aa2-8bcd-63ebade72464-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.599538 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-logs" (OuterVolumeSpecName: "logs") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.603559 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.604500 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-scripts" (OuterVolumeSpecName: "scripts") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.607598 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f01f64-f6e4-42f3-80f8-27c86f82eeef-kube-api-access-czq79" (OuterVolumeSpecName: "kube-api-access-czq79") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "kube-api-access-czq79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.633748 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.636602 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.669562 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-config-data" (OuterVolumeSpecName: "config-data") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.700588 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "18f01f64-f6e4-42f3-80f8-27c86f82eeef" (UID: "18f01f64-f6e4-42f3-80f8-27c86f82eeef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709183 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709227 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709269 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709280 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709289 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czq79\" (UniqueName: \"kubernetes.io/projected/18f01f64-f6e4-42f3-80f8-27c86f82eeef-kube-api-access-czq79\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709302 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/18f01f64-f6e4-42f3-80f8-27c86f82eeef-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709310 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18f01f64-f6e4-42f3-80f8-27c86f82eeef-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.709369 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.759239 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.793933 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.811283 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.811318 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.817662 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.817988 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bdb7d70-31a9-4d52-aae0-072e8c62a23f" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818004 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bdb7d70-31a9-4d52-aae0-072e8c62a23f" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818015 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff3c7ac-e403-4826-bf45-a6bed05570b7" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818021 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff3c7ac-e403-4826-bf45-a6bed05570b7" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818031 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-central-agent" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818037 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-central-agent" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818052 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-httpd" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818058 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-httpd" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818069 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-log" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818075 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-log" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818083 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-notification-agent" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818089 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-notification-agent" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818099 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8624816-8c2c-4d9c-b3a5-426253850926" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818104 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8624816-8c2c-4d9c-b3a5-426253850926" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818208 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053ccacf-d473-49f5-89e5-545a753e5e03" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818214 4931 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="053ccacf-d473-49f5-89e5-545a753e5e03" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818225 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="sg-core" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818230 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="sg-core" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818245 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cbb37a-882a-46cf-9cee-0543ac708004" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818253 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cbb37a-882a-46cf-9cee-0543ac708004" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818268 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="proxy-httpd" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818275 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="proxy-httpd" Jan 30 05:27:18 crc kubenswrapper[4931]: E0130 05:27:18.818290 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818296 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818513 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-central-agent" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818526 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="sg-core" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818537 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="proxy-httpd" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818550 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="053ccacf-d473-49f5-89e5-545a753e5e03" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818562 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff3c7ac-e403-4826-bf45-a6bed05570b7" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818569 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" containerName="mariadb-account-create-update" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818578 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8624816-8c2c-4d9c-b3a5-426253850926" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818588 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cbb37a-882a-46cf-9cee-0543ac708004" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818598 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" containerName="ceilometer-notification-agent" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818609 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-httpd" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818617 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" containerName="glance-log" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.818623 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bdb7d70-31a9-4d52-aae0-072e8c62a23f" containerName="mariadb-database-create" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.820380 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.822043 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.822573 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.826089 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.827634 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912524 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-config-data\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912597 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912709 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9n8v\" (UniqueName: \"kubernetes.io/projected/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-kube-api-access-f9n8v\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912735 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912751 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-scripts\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912821 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912877 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:18 crc kubenswrapper[4931]: I0130 05:27:18.912892 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015129 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015253 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9n8v\" (UniqueName: \"kubernetes.io/projected/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-kube-api-access-f9n8v\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015305 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015343 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-scripts\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015460 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015520 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.015615 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-config-data\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.016023 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.016129 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.020558 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-config-data\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.020737 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.022409 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.025719 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-scripts\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.033232 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.038284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9n8v\" (UniqueName: \"kubernetes.io/projected/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-kube-api-access-f9n8v\") pod \"ceilometer-0\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.063165 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s287f"] Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.065389 4931 util.go:30] "No sandbox for pod can be found. 
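
[Annotation] The SyncLoop ADD/UPDATE/DELETE/REMOVE lines with source="api" are the kubelet reacting to its pod watch, and the reflector lines show the secret caches filling before the new ceilometer-0 volumes mount. The same churn can be observed from outside the kubelet with a client-go informer; a sketch scoped to the openstack namespace (the kubeconfig path is a placeholder).

```go
// Sketch: watch the same ADD/UPDATE/DELETE churn that the SyncLoop lines
// above report, using a client-go shared informer.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 0, informers.WithNamespace("openstack"))
	informer := factory.Core().V1().Pods().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			p := obj.(*corev1.Pod)
			fmt.Println("ADD", p.Name, p.UID) // cf. "SyncLoop ADD" above
		},
		DeleteFunc: func(obj interface{}) {
			if p, ok := obj.(*corev1.Pod); ok {
				fmt.Println("DELETE", p.Name) // cf. "SyncLoop DELETE"
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	select {} // run until killed
}
```
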
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.067878 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5k4cm" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.068393 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.068633 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.076212 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s287f"] Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.134713 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.218830 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.219076 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-scripts\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.219151 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzrx\" (UniqueName: \"kubernetes.io/projected/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-kube-api-access-trzrx\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.219257 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-config-data\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.321005 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-scripts\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.321246 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trzrx\" (UniqueName: \"kubernetes.io/projected/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-kube-api-access-trzrx\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.321278 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-config-data\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.321327 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.327266 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-scripts\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.327498 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-config-data\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.336416 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trzrx\" (UniqueName: \"kubernetes.io/projected/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-kube-api-access-trzrx\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.371406 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-s287f\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.385795 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"18f01f64-f6e4-42f3-80f8-27c86f82eeef","Type":"ContainerDied","Data":"f169988e956408b39f47bea60212630dcedf5b4c3315a89463a6589988357590"} Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.385839 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.385855 4931 scope.go:117] "RemoveContainer" containerID="6f20ab78e04ca2466a780c5cc51a4b37e0f487abee57f4e067c29bab7787be5d" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.412537 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.420710 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.431396 4931 scope.go:117] "RemoveContainer" containerID="d2ded790b556dc13af017d23c970f8fe6d49472a2741355949522d19b2e1e389" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.437459 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177d0201-cde1-4aa2-8bcd-63ebade72464" path="/var/lib/kubelet/pods/177d0201-cde1-4aa2-8bcd-63ebade72464/volumes" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.438117 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.451509 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.453062 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.455560 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.456071 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.465329 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524311 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524412 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524507 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524538 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-logs\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524626 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.524657 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpkl\" (UniqueName: \"kubernetes.io/projected/7c0ddaec-4521-4898-8649-262b52f24acb-kube-api-access-pmpkl\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626521 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626814 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-logs\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626857 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-scripts\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626878 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpkl\" (UniqueName: \"kubernetes.io/projected/7c0ddaec-4521-4898-8649-262b52f24acb-kube-api-access-pmpkl\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626921 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.626990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.627014 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.627380 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.634490 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.635117 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-logs\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.640529 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.640836 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-config-data\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.643162 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.643622 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.652790 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpkl\" (UniqueName: \"kubernetes.io/projected/7c0ddaec-4521-4898-8649-262b52f24acb-kube-api-access-pmpkl\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.673905 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:19 crc kubenswrapper[4931]: W0130 05:27:19.675009 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c888ca_c1fd_452f_9fd2_ff821f4b18e5.slice/crio-5e4a201ab16bebe12731751910597ba0cdd866af8fc6c845f3bcb4b6eb6b4330 WatchSource:0}: Error finding container 5e4a201ab16bebe12731751910597ba0cdd866af8fc6c845f3bcb4b6eb6b4330: Status 404 returned error can't find the container with id 5e4a201ab16bebe12731751910597ba0cdd866af8fc6c845f3bcb4b6eb6b4330 Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.677287 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.783737 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:27:19 crc kubenswrapper[4931]: I0130 05:27:19.882588 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s287f"] Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.333357 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:27:20 crc kubenswrapper[4931]: W0130 05:27:20.334217 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c0ddaec_4521_4898_8649_262b52f24acb.slice/crio-f1e40e63465f32ce48c188f63cada07df803d2d0b29cf0b23188f72f3a13a25b WatchSource:0}: Error finding container f1e40e63465f32ce48c188f63cada07df803d2d0b29cf0b23188f72f3a13a25b: Status 404 returned error can't find the container with id f1e40e63465f32ce48c188f63cada07df803d2d0b29cf0b23188f72f3a13a25b Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.411256 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s287f" event={"ID":"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b","Type":"ContainerStarted","Data":"33ba5db3481aa96c9b6d1d5ea1daf2013941f187605ad329f6f6d5a0b2ba2f94"} Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.414386 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c0ddaec-4521-4898-8649-262b52f24acb","Type":"ContainerStarted","Data":"f1e40e63465f32ce48c188f63cada07df803d2d0b29cf0b23188f72f3a13a25b"} Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.418406 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerStarted","Data":"c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6"} Jan 30 05:27:20 crc 
kubenswrapper[4931]: I0130 05:27:20.418506 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerStarted","Data":"5e4a201ab16bebe12731751910597ba0cdd866af8fc6c845f3bcb4b6eb6b4330"} Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.614254 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.614300 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.667295 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:20 crc kubenswrapper[4931]: I0130 05:27:20.677056 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:21 crc kubenswrapper[4931]: I0130 05:27:21.435136 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f01f64-f6e4-42f3-80f8-27c86f82eeef" path="/var/lib/kubelet/pods/18f01f64-f6e4-42f3-80f8-27c86f82eeef/volumes" Jan 30 05:27:21 crc kubenswrapper[4931]: I0130 05:27:21.457799 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerStarted","Data":"fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb"} Jan 30 05:27:21 crc kubenswrapper[4931]: I0130 05:27:21.468121 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c0ddaec-4521-4898-8649-262b52f24acb","Type":"ContainerStarted","Data":"754804bc268dc311547eddc996a035b132392b26798898d2ba034bc32dc1ee16"} Jan 30 05:27:21 crc kubenswrapper[4931]: I0130 05:27:21.468546 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:21 crc kubenswrapper[4931]: I0130 05:27:21.468566 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:22 crc kubenswrapper[4931]: I0130 05:27:22.481814 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c0ddaec-4521-4898-8649-262b52f24acb","Type":"ContainerStarted","Data":"3509d69982e816f9732671b256bbf363b32c9a199362011499a8607bf3a6e808"} Jan 30 05:27:22 crc kubenswrapper[4931]: I0130 05:27:22.485632 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerStarted","Data":"1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834"} Jan 30 05:27:22 crc kubenswrapper[4931]: I0130 05:27:22.503407 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.503386991 podStartE2EDuration="3.503386991s" podCreationTimestamp="2026-01-30 05:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:27:22.497472407 +0000 UTC m=+1177.867382664" watchObservedRunningTime="2026-01-30 05:27:22.503386991 +0000 UTC m=+1177.873297258" Jan 30 05:27:23 crc kubenswrapper[4931]: I0130 05:27:23.016623 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
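
[Annotation] The probe lines show each glance pod flapping startup unhealthy before started, with readiness momentarily logged as an empty status while the new containers come up. A sketch of the kind of startup/readiness pair that produces this pattern, using the current corev1 ProbeHandler field; the thresholds and endpoint are illustrative, not glance's actual spec.

```go
// Sketch: a startup/readiness probe pair of the kind behind the
// unhealthy -> started -> ready transitions above. Values are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	startup := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthcheck", Port: intstr.FromInt(9292)},
		},
		PeriodSeconds:    3,
		FailureThreshold: 30, // pod may log "unhealthy" several times before "started"
	}
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthcheck", Port: intstr.FromInt(9292)},
		},
		PeriodSeconds: 5,
	}
	fmt.Println(startup, readiness)
}
```
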
status="ready" pod="openstack/cinder-api-0" Jan 30 05:27:23 crc kubenswrapper[4931]: I0130 05:27:23.241124 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:23 crc kubenswrapper[4931]: I0130 05:27:23.568193 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:23 crc kubenswrapper[4931]: I0130 05:27:23.568560 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.108217 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.508495 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-central-agent" containerID="cri-o://c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6" gracePeriod=30 Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.508852 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerStarted","Data":"3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133"} Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.508932 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.509198 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="proxy-httpd" containerID="cri-o://3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133" gracePeriod=30 Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.509283 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="sg-core" containerID="cri-o://1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834" gracePeriod=30 Jan 30 05:27:24 crc kubenswrapper[4931]: I0130 05:27:24.509320 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-notification-agent" containerID="cri-o://fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb" gracePeriod=30 Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.448714 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.830387463 podStartE2EDuration="7.448699274s" podCreationTimestamp="2026-01-30 05:27:18 +0000 UTC" firstStartedPulling="2026-01-30 05:27:19.678846081 +0000 UTC m=+1175.048756338" lastFinishedPulling="2026-01-30 05:27:23.297157892 +0000 UTC m=+1178.667068149" observedRunningTime="2026-01-30 05:27:24.538039871 +0000 UTC m=+1179.907950128" watchObservedRunningTime="2026-01-30 05:27:25.448699274 +0000 UTC m=+1180.818609531" Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.519559 4931 generic.go:334] "Generic (PLEG): container finished" podID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerID="3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133" exitCode=0 Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.519590 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerID="1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834" exitCode=2 Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.519597 4931 generic.go:334] "Generic (PLEG): container finished" podID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerID="fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb" exitCode=0 Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.519618 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerDied","Data":"3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133"} Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.519650 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerDied","Data":"1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834"} Jan 30 05:27:25 crc kubenswrapper[4931]: I0130 05:27:25.519660 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerDied","Data":"fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb"} Jan 30 05:27:29 crc kubenswrapper[4931]: I0130 05:27:29.784965 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 05:27:29 crc kubenswrapper[4931]: I0130 05:27:29.785391 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 05:27:29 crc kubenswrapper[4931]: I0130 05:27:29.820686 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 05:27:29 crc kubenswrapper[4931]: I0130 05:27:29.876159 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.518968 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.562546 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s287f" event={"ID":"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b","Type":"ContainerStarted","Data":"c2771265ae8a990e0e69c0f116c64cc25eecd94f6e185173fb3394e2e6fbe468"} Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.566921 4931 generic.go:334] "Generic (PLEG): container finished" podID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerID="c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6" exitCode=0 Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.567713 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.567735 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerDied","Data":"c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6"} Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.567795 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5","Type":"ContainerDied","Data":"5e4a201ab16bebe12731751910597ba0cdd866af8fc6c845f3bcb4b6eb6b4330"} Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.567814 4931 scope.go:117] "RemoveContainer" containerID="3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.568096 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.570507 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.592948 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-s287f" podStartSLOduration=1.618121457 podStartE2EDuration="11.59292611s" podCreationTimestamp="2026-01-30 05:27:19 +0000 UTC" firstStartedPulling="2026-01-30 05:27:19.899593417 +0000 UTC m=+1175.269503674" lastFinishedPulling="2026-01-30 05:27:29.87439807 +0000 UTC m=+1185.244308327" observedRunningTime="2026-01-30 05:27:30.584699156 +0000 UTC m=+1185.954609413" watchObservedRunningTime="2026-01-30 05:27:30.59292611 +0000 UTC m=+1185.962836367" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.595618 4931 scope.go:117] "RemoveContainer" containerID="1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.619725 4931 scope.go:117] "RemoveContainer" containerID="fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.632734 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-run-httpd\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.632809 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-log-httpd\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.632871 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-config-data\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.632907 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-scripts\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 
crc kubenswrapper[4931]: I0130 05:27:30.632937 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-sg-core-conf-yaml\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.632995 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-ceilometer-tls-certs\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.633070 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-combined-ca-bundle\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.633108 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9n8v\" (UniqueName: \"kubernetes.io/projected/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-kube-api-access-f9n8v\") pod \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\" (UID: \"a8c888ca-c1fd-452f-9fd2-ff821f4b18e5\") " Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.637689 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.638174 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.644512 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-kube-api-access-f9n8v" (OuterVolumeSpecName: "kube-api-access-f9n8v") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "kube-api-access-f9n8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.647812 4931 scope.go:117] "RemoveContainer" containerID="c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.650504 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-scripts" (OuterVolumeSpecName: "scripts") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.677658 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.683033 4931 scope.go:117] "RemoveContainer" containerID="3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133" Jan 30 05:27:30 crc kubenswrapper[4931]: E0130 05:27:30.683494 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133\": container with ID starting with 3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133 not found: ID does not exist" containerID="3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.683550 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133"} err="failed to get container status \"3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133\": rpc error: code = NotFound desc = could not find container \"3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133\": container with ID starting with 3d1af17ef4276356fc20e1e86fb994d75422189cd3d057d792ef2830cd3e7133 not found: ID does not exist" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.683687 4931 scope.go:117] "RemoveContainer" containerID="1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834" Jan 30 05:27:30 crc kubenswrapper[4931]: E0130 05:27:30.684043 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834\": container with ID starting with 1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834 not found: ID does not exist" containerID="1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.684095 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834"} err="failed to get container status \"1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834\": rpc error: code = NotFound desc = could not find container \"1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834\": container with ID starting with 1c290deb1ddccdee82913980b6f27c116b92dbaa10273cd5835fbf2da7ff3834 not found: ID does not exist" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.684113 4931 scope.go:117] "RemoveContainer" containerID="fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb" Jan 30 05:27:30 crc kubenswrapper[4931]: E0130 05:27:30.684335 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb\": container with ID starting with fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb not found: ID does not exist" 
containerID="fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.684359 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb"} err="failed to get container status \"fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb\": rpc error: code = NotFound desc = could not find container \"fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb\": container with ID starting with fff24e78cf96f52120648f1c7bc9b2d5e91c3ff0f2cf372e795cdf62b3b76dfb not found: ID does not exist" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.684375 4931 scope.go:117] "RemoveContainer" containerID="c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6" Jan 30 05:27:30 crc kubenswrapper[4931]: E0130 05:27:30.684651 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6\": container with ID starting with c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6 not found: ID does not exist" containerID="c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.684677 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6"} err="failed to get container status \"c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6\": rpc error: code = NotFound desc = could not find container \"c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6\": container with ID starting with c6de91be4c5b640d1c85b573a20a5ff3615c8377869fa35e5e5c44374ab7f6c6 not found: ID does not exist" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.707488 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.732039 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736546 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736569 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9n8v\" (UniqueName: \"kubernetes.io/projected/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-kube-api-access-f9n8v\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736585 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736597 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736608 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736618 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.736629 4931 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.757842 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-config-data" (OuterVolumeSpecName: "config-data") pod "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" (UID: "a8c888ca-c1fd-452f-9fd2-ff821f4b18e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.838732 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.980116 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:30 crc kubenswrapper[4931]: I0130 05:27:30.997804 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.007704 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:31 crc kubenswrapper[4931]: E0130 05:27:31.008128 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="proxy-httpd" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008150 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="proxy-httpd" Jan 30 05:27:31 crc kubenswrapper[4931]: E0130 05:27:31.008180 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="sg-core" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008189 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="sg-core" Jan 30 05:27:31 crc kubenswrapper[4931]: E0130 05:27:31.008206 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-notification-agent" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008214 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-notification-agent" Jan 30 05:27:31 crc kubenswrapper[4931]: E0130 05:27:31.008230 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-central-agent" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008238 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-central-agent" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008477 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="sg-core" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008508 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-notification-agent" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008519 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="ceilometer-central-agent" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.008528 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" containerName="proxy-httpd" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.021137 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.021252 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.024168 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.024572 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.025726 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.146777 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.146817 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xw72\" (UniqueName: \"kubernetes.io/projected/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-kube-api-access-9xw72\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.146835 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.146895 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-run-httpd\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.146926 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-log-httpd\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.147048 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-scripts\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.147152 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.147238 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-config-data\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 
30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.248802 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.249108 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-config-data\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.249491 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.249604 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xw72\" (UniqueName: \"kubernetes.io/projected/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-kube-api-access-9xw72\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.249701 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.249867 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-run-httpd\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.250020 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-log-httpd\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.250219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-run-httpd\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.250393 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-log-httpd\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.251669 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-scripts\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc 
kubenswrapper[4931]: I0130 05:27:31.254052 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.254881 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.255784 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-config-data\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.264452 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.269358 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-scripts\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.272858 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xw72\" (UniqueName: \"kubernetes.io/projected/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-kube-api-access-9xw72\") pod \"ceilometer-0\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") " pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.363542 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.446879 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c888ca-c1fd-452f-9fd2-ff821f4b18e5" path="/var/lib/kubelet/pods/a8c888ca-c1fd-452f-9fd2-ff821f4b18e5/volumes" Jan 30 05:27:31 crc kubenswrapper[4931]: W0130 05:27:31.846492 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf8c845_e69c_41e6_9e59_4b9b9fceaaf7.slice/crio-011e359b19d615fee43023e3c1e45b97d099d18b94cb63d33b5905aec3a68e27 WatchSource:0}: Error finding container 011e359b19d615fee43023e3c1e45b97d099d18b94cb63d33b5905aec3a68e27: Status 404 returned error can't find the container with id 011e359b19d615fee43023e3c1e45b97d099d18b94cb63d33b5905aec3a68e27 Jan 30 05:27:31 crc kubenswrapper[4931]: I0130 05:27:31.847605 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:27:32 crc kubenswrapper[4931]: I0130 05:27:32.408134 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 05:27:32 crc kubenswrapper[4931]: I0130 05:27:32.586018 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerStarted","Data":"011e359b19d615fee43023e3c1e45b97d099d18b94cb63d33b5905aec3a68e27"} Jan 30 05:27:32 crc kubenswrapper[4931]: I0130 05:27:32.586054 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:27:32 crc kubenswrapper[4931]: I0130 05:27:32.651664 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 05:27:33 crc kubenswrapper[4931]: I0130 05:27:33.596408 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerStarted","Data":"782c5f8dbe0e7576669ed328ec36323e5d152ca2c37c77db802604122975e914"} Jan 30 05:27:34 crc kubenswrapper[4931]: I0130 05:27:34.641105 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerStarted","Data":"89d3e2e5267fbad75d566e6ba9ac104cbc8326782fb62f0bd4ed8c4f9b169c76"} Jan 30 05:27:35 crc kubenswrapper[4931]: I0130 05:27:35.653262 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerStarted","Data":"37cec202033bdc3e70a415b285410f6ce8158b9541b9d20f36bd938b5978559c"} Jan 30 05:27:36 crc kubenswrapper[4931]: I0130 05:27:36.666902 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerStarted","Data":"149c802638d11431261e2009e655ac397a1354f084b06d7f2da2c77118f48d07"} Jan 30 05:27:36 crc kubenswrapper[4931]: I0130 05:27:36.668725 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 05:27:36 crc kubenswrapper[4931]: I0130 05:27:36.707547 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.268554896 podStartE2EDuration="6.707522656s" podCreationTimestamp="2026-01-30 05:27:30 +0000 UTC" firstStartedPulling="2026-01-30 05:27:31.850128902 +0000 UTC m=+1187.220039179" lastFinishedPulling="2026-01-30 
05:27:36.289096672 +0000 UTC m=+1191.659006939" observedRunningTime="2026-01-30 05:27:36.689101828 +0000 UTC m=+1192.059012125" watchObservedRunningTime="2026-01-30 05:27:36.707522656 +0000 UTC m=+1192.077432953" Jan 30 05:27:43 crc kubenswrapper[4931]: I0130 05:27:43.772164 4931 generic.go:334] "Generic (PLEG): container finished" podID="ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" containerID="c2771265ae8a990e0e69c0f116c64cc25eecd94f6e185173fb3394e2e6fbe468" exitCode=0 Jan 30 05:27:43 crc kubenswrapper[4931]: I0130 05:27:43.772232 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s287f" event={"ID":"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b","Type":"ContainerDied","Data":"c2771265ae8a990e0e69c0f116c64cc25eecd94f6e185173fb3394e2e6fbe468"} Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.194217 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.271257 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-config-data\") pod \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.271300 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trzrx\" (UniqueName: \"kubernetes.io/projected/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-kube-api-access-trzrx\") pod \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.271473 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-combined-ca-bundle\") pod \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.271545 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-scripts\") pod \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\" (UID: \"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b\") " Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.279767 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-scripts" (OuterVolumeSpecName: "scripts") pod "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" (UID: "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.280022 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-kube-api-access-trzrx" (OuterVolumeSpecName: "kube-api-access-trzrx") pod "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" (UID: "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b"). InnerVolumeSpecName "kube-api-access-trzrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.306230 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" (UID: "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.308808 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-config-data" (OuterVolumeSpecName: "config-data") pod "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" (UID: "ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.374058 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.374107 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.374130 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trzrx\" (UniqueName: \"kubernetes.io/projected/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-kube-api-access-trzrx\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.374150 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.813762 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-s287f" event={"ID":"ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b","Type":"ContainerDied","Data":"33ba5db3481aa96c9b6d1d5ea1daf2013941f187605ad329f6f6d5a0b2ba2f94"} Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.813821 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ba5db3481aa96c9b6d1d5ea1daf2013941f187605ad329f6f6d5a0b2ba2f94" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.813920 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-s287f" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.956751 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 05:27:45 crc kubenswrapper[4931]: E0130 05:27:45.957410 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" containerName="nova-cell0-conductor-db-sync" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.957474 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" containerName="nova-cell0-conductor-db-sync" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.957889 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" containerName="nova-cell0-conductor-db-sync" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.959005 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:45 crc kubenswrapper[4931]: I0130 05:27:45.966853 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.002319 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5k4cm" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.002732 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.103938 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hln5j\" (UniqueName: \"kubernetes.io/projected/88988b92-cd64-490d-b55f-959ecf4095af-kube-api-access-hln5j\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.104274 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.104460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.206642 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hln5j\" (UniqueName: \"kubernetes.io/projected/88988b92-cd64-490d-b55f-959ecf4095af-kube-api-access-hln5j\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.206899 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc 
kubenswrapper[4931]: I0130 05:27:46.206949 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.224500 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.224537 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.229988 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hln5j\" (UniqueName: \"kubernetes.io/projected/88988b92-cd64-490d-b55f-959ecf4095af-kube-api-access-hln5j\") pod \"nova-cell0-conductor-0\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.337691 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:46 crc kubenswrapper[4931]: I0130 05:27:46.903376 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 05:27:47 crc kubenswrapper[4931]: I0130 05:27:47.841313 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"88988b92-cd64-490d-b55f-959ecf4095af","Type":"ContainerStarted","Data":"83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06"} Jan 30 05:27:47 crc kubenswrapper[4931]: I0130 05:27:47.841721 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"88988b92-cd64-490d-b55f-959ecf4095af","Type":"ContainerStarted","Data":"56a8c3403b77c67382071da65bd384ea85d43f4776ebb7971c9a14fd4e392984"} Jan 30 05:27:47 crc kubenswrapper[4931]: I0130 05:27:47.841794 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:47 crc kubenswrapper[4931]: I0130 05:27:47.871601 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.87158233 podStartE2EDuration="2.87158233s" podCreationTimestamp="2026-01-30 05:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:27:47.871252472 +0000 UTC m=+1203.241162759" watchObservedRunningTime="2026-01-30 05:27:47.87158233 +0000 UTC m=+1203.241492587" Jan 30 05:27:56 crc kubenswrapper[4931]: I0130 05:27:56.367616 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 05:27:56 crc kubenswrapper[4931]: I0130 05:27:56.977445 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9b8l8"] Jan 30 05:27:56 crc kubenswrapper[4931]: I0130 05:27:56.979257 
4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:56 crc kubenswrapper[4931]: I0130 05:27:56.982020 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 05:27:56 crc kubenswrapper[4931]: I0130 05:27:56.982395 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 05:27:56 crc kubenswrapper[4931]: I0130 05:27:56.992054 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9b8l8"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.067376 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-scripts\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.067788 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf54v\" (UniqueName: \"kubernetes.io/projected/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-kube-api-access-sf54v\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.067891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-config-data\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.067934 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.170643 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf54v\" (UniqueName: \"kubernetes.io/projected/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-kube-api-access-sf54v\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.170732 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-config-data\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.170763 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.170808 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-scripts\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.193022 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.194105 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-config-data\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.208876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-scripts\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.241306 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf54v\" (UniqueName: \"kubernetes.io/projected/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-kube-api-access-sf54v\") pod \"nova-cell0-cell-mapping-9b8l8\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") " pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.261727 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.263403 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.271769 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.310516 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.311992 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.317752 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.318136 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9b8l8" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.325595 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.358489 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.382009 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.382141 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwsz9\" (UniqueName: \"kubernetes.io/projected/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-kube-api-access-mwsz9\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.382180 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.437773 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.446567 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.461862 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.487375 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.487485 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.490531 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76c52b2-cfad-4017-a265-142c8e1b54f9-logs\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.490623 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9czc\" (UniqueName: \"kubernetes.io/projected/f76c52b2-cfad-4017-a265-142c8e1b54f9-kube-api-access-z9czc\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.490678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.490700 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-config-data\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.490862 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwsz9\" (UniqueName: \"kubernetes.io/projected/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-kube-api-access-mwsz9\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.499439 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.501042 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.531073 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.531132 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.532096 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.537827 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.544066 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwsz9\" (UniqueName: \"kubernetes.io/projected/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-kube-api-access-mwsz9\") pod \"nova-cell1-novncproxy-0\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.578488 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595046 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-config-data\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595180 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-config-data\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595288 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37fcb81-9df1-411b-b593-8ca56c518f33-logs\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595342 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76c52b2-cfad-4017-a265-142c8e1b54f9-logs\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595361 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595396 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9czc\" (UniqueName: \"kubernetes.io/projected/f76c52b2-cfad-4017-a265-142c8e1b54f9-kube-api-access-z9czc\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595429 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-m84h6\" (UniqueName: \"kubernetes.io/projected/e37fcb81-9df1-411b-b593-8ca56c518f33-kube-api-access-m84h6\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.595467 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.597276 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76c52b2-cfad-4017-a265-142c8e1b54f9-logs\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.602074 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.617828 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-config-data\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.625678 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9czc\" (UniqueName: \"kubernetes.io/projected/f76c52b2-cfad-4017-a265-142c8e1b54f9-kube-api-access-z9czc\") pod \"nova-api-0\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.640284 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-p25hj"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.644490 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.653461 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-p25hj"] Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.655863 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.699783 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37fcb81-9df1-411b-b593-8ca56c518f33-logs\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.699899 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.699938 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m84h6\" (UniqueName: \"kubernetes.io/projected/e37fcb81-9df1-411b-b593-8ca56c518f33-kube-api-access-m84h6\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.699985 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.700057 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-config-data\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.700091 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-config-data\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.700942 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37fcb81-9df1-411b-b593-8ca56c518f33-logs\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.703664 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xlx\" (UniqueName: \"kubernetes.io/projected/018cf21e-c9c2-4ab4-8794-f6e066bafc86-kube-api-access-24xlx\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.706744 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-config-data\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.707655 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.725837 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m84h6\" (UniqueName: \"kubernetes.io/projected/e37fcb81-9df1-411b-b593-8ca56c518f33-kube-api-access-m84h6\") pod \"nova-metadata-0\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") " pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805102 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-config-data\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805191 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805239 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24xlx\" (UniqueName: \"kubernetes.io/projected/018cf21e-c9c2-4ab4-8794-f6e066bafc86-kube-api-access-24xlx\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805306 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805338 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805369 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805407 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805442 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvmx\" (UniqueName: 
\"kubernetes.io/projected/4dc14ae7-f05f-4093-838b-bdd419f4302f-kube-api-access-ftvmx\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.805482 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-config\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.809485 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.810768 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-config-data\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.817312 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.828330 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24xlx\" (UniqueName: \"kubernetes.io/projected/018cf21e-c9c2-4ab4-8794-f6e066bafc86-kube-api-access-24xlx\") pod \"nova-scheduler-0\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.834192 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.860735 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.913025 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-config\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.913144 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.913237 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.913271 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.913326 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.913347 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftvmx\" (UniqueName: \"kubernetes.io/projected/4dc14ae7-f05f-4093-838b-bdd419f4302f-kube-api-access-ftvmx\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.914584 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-config\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.914584 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.914730 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 
05:27:57.915290 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.915580 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.938414 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftvmx\" (UniqueName: \"kubernetes.io/projected/4dc14ae7-f05f-4093-838b-bdd419f4302f-kube-api-access-ftvmx\") pod \"dnsmasq-dns-557bbc7df7-p25hj\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") " pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:57 crc kubenswrapper[4931]: I0130 05:27:57.998345 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.043798 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9b8l8"] Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.103498 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhn9j"] Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.104746 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.107099 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.107723 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.115932 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhn9j"] Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.180696 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.222055 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.222095 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-scripts\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.222143 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-config-data\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.222197 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9q6\" (UniqueName: \"kubernetes.io/projected/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-kube-api-access-dc9q6\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.304473 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.324634 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-config-data\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.324715 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc9q6\" (UniqueName: \"kubernetes.io/projected/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-kube-api-access-dc9q6\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.324795 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.324818 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-scripts\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.333164 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-scripts\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.334926 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-config-data\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.335433 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j" 
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.340595 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9q6\" (UniqueName: \"kubernetes.io/projected/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-kube-api-access-dc9q6\") pod \"nova-cell1-conductor-db-sync-jhn9j\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") " pod="openstack/nova-cell1-conductor-db-sync-jhn9j"
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.437324 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhn9j"
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.479396 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.521884 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.665459 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-p25hj"]
Jan 30 05:27:58 crc kubenswrapper[4931]: W0130 05:27:58.679483 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dc14ae7_f05f_4093_838b_bdd419f4302f.slice/crio-8d7831c7e52a58c64227ffa0e77bd37bd5e37f6fcdda5357867d37c004b57577 WatchSource:0}: Error finding container 8d7831c7e52a58c64227ffa0e77bd37bd5e37f6fcdda5357867d37c004b57577: Status 404 returned error can't find the container with id 8d7831c7e52a58c64227ffa0e77bd37bd5e37f6fcdda5357867d37c004b57577
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.908653 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhn9j"]
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.993109 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f76c52b2-cfad-4017-a265-142c8e1b54f9","Type":"ContainerStarted","Data":"ce62ba9a7d987c86fe58643c3364d299b44cd5fdb6e5e34e5b8f029af4afb80d"}
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.995045 4931 generic.go:334] "Generic (PLEG): container finished" podID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerID="056aa11a16b72fe7fde4370093154af79d24b07c3142cb8943c78be2016d3fc6" exitCode=0
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.995269 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" event={"ID":"4dc14ae7-f05f-4093-838b-bdd419f4302f","Type":"ContainerDied","Data":"056aa11a16b72fe7fde4370093154af79d24b07c3142cb8943c78be2016d3fc6"}
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.995309 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" event={"ID":"4dc14ae7-f05f-4093-838b-bdd419f4302f","Type":"ContainerStarted","Data":"8d7831c7e52a58c64227ffa0e77bd37bd5e37f6fcdda5357867d37c004b57577"}
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.996259 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3","Type":"ContainerStarted","Data":"77d36bb8a11804c505d48bebee2dbafaeb19326f0f1ced3b73af355d57d86b2b"}
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.998585 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9b8l8" event={"ID":"b0a8f8fe-306a-4373-bbb0-d96f2b498d62","Type":"ContainerStarted","Data":"346e5462e41c54c8f5c2422490f080d2b64f85c405ea5cc5337aa66fee775153"}
Jan 30 05:27:58 crc kubenswrapper[4931]: I0130 05:27:58.998615 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9b8l8" event={"ID":"b0a8f8fe-306a-4373-bbb0-d96f2b498d62","Type":"ContainerStarted","Data":"831b8fc29d43f639d90655aa063689063afad41ee488b8e379f4edd22fec355a"}
Jan 30 05:27:59 crc kubenswrapper[4931]: I0130 05:27:59.004388 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"018cf21e-c9c2-4ab4-8794-f6e066bafc86","Type":"ContainerStarted","Data":"f850561dfba344195d6aaf76e50b967d79d23fca8dbd9d77fa357655bcad14dc"}
Jan 30 05:27:59 crc kubenswrapper[4931]: I0130 05:27:59.009008 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e37fcb81-9df1-411b-b593-8ca56c518f33","Type":"ContainerStarted","Data":"78bd5fff55edf34b6d1b969823b41d22ccbf3eabadd222063f311257dc45d17d"}
Jan 30 05:27:59 crc kubenswrapper[4931]: I0130 05:27:59.014519 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhn9j" event={"ID":"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f","Type":"ContainerStarted","Data":"05fd8e29ff186451c0c9db4ef3d2f2174b59837370d11120981136e0fa9ba630"}
Jan 30 05:27:59 crc kubenswrapper[4931]: I0130 05:27:59.036373 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9b8l8" podStartSLOduration=3.03632974 podStartE2EDuration="3.03632974s" podCreationTimestamp="2026-01-30 05:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:27:59.028360343 +0000 UTC m=+1214.398270620" watchObservedRunningTime="2026-01-30 05:27:59.03632974 +0000 UTC m=+1214.406240007"
Jan 30 05:28:00 crc kubenswrapper[4931]: I0130 05:28:00.024541 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhn9j" event={"ID":"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f","Type":"ContainerStarted","Data":"6fe17572613dbd341b30bd762c1b5735b49c432308fe3f9a9ede6d5185282afe"}
Jan 30 05:28:00 crc kubenswrapper[4931]: I0130 05:28:00.028503 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" event={"ID":"4dc14ae7-f05f-4093-838b-bdd419f4302f","Type":"ContainerStarted","Data":"a729151ede12640ea81c41f5d7f2d36efd861e7a4d31b991fe42dd4d2139fbe2"}
Jan 30 05:28:00 crc kubenswrapper[4931]: I0130 05:28:00.028855 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj"
Jan 30 05:28:00 crc kubenswrapper[4931]: I0130 05:28:00.044706 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jhn9j" podStartSLOduration=2.044685188 podStartE2EDuration="2.044685188s" podCreationTimestamp="2026-01-30 05:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:00.040378416 +0000 UTC m=+1215.410288693" watchObservedRunningTime="2026-01-30 05:28:00.044685188 +0000 UTC m=+1215.414595455"
Jan 30 05:28:00 crc kubenswrapper[4931]: I0130 05:28:00.065781 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" podStartSLOduration=3.065760213 podStartE2EDuration="3.065760213s" podCreationTimestamp="2026-01-30 05:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:00.056111364 +0000 UTC m=+1215.426021661" watchObservedRunningTime="2026-01-30 05:28:00.065760213 +0000 UTC m=+1215.435670470"
Jan 30 05:28:01 crc kubenswrapper[4931]: I0130 05:28:01.387123 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 05:28:01 crc kubenswrapper[4931]: I0130 05:28:01.753385 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 05:28:01 crc kubenswrapper[4931]: I0130 05:28:01.770764 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.051961 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"018cf21e-c9c2-4ab4-8794-f6e066bafc86","Type":"ContainerStarted","Data":"56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16"}
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.055139 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e37fcb81-9df1-411b-b593-8ca56c518f33","Type":"ContainerStarted","Data":"5894edfe624e331ed6304798892205879e56e8c005b21dd019bad175228231c6"}
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.055199 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e37fcb81-9df1-411b-b593-8ca56c518f33","Type":"ContainerStarted","Data":"7b8d1ff499fbb8ebcbcbf87555e175867c57e89396d957fe7b851fe96dc36245"}
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.055266 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-log" containerID="cri-o://7b8d1ff499fbb8ebcbcbf87555e175867c57e89396d957fe7b851fe96dc36245" gracePeriod=30
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.055296 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-metadata" containerID="cri-o://5894edfe624e331ed6304798892205879e56e8c005b21dd019bad175228231c6" gracePeriod=30
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.058068 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f76c52b2-cfad-4017-a265-142c8e1b54f9","Type":"ContainerStarted","Data":"0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386"}
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.058245 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f76c52b2-cfad-4017-a265-142c8e1b54f9","Type":"ContainerStarted","Data":"bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c"}
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.062492 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3","Type":"ContainerStarted","Data":"8389df76ed09df3262a31c554555764b5a96d65170cca0da5f3f94ed26654194"}
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.062835 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8389df76ed09df3262a31c554555764b5a96d65170cca0da5f3f94ed26654194" gracePeriod=30
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.083615 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.269045106 podStartE2EDuration="5.083596868s" podCreationTimestamp="2026-01-30 05:27:57 +0000 UTC" firstStartedPulling="2026-01-30 05:27:58.53289076 +0000 UTC m=+1213.902801017" lastFinishedPulling="2026-01-30 05:28:01.347442502 +0000 UTC m=+1216.717352779" observedRunningTime="2026-01-30 05:28:02.077982172 +0000 UTC m=+1217.447892429" watchObservedRunningTime="2026-01-30 05:28:02.083596868 +0000 UTC m=+1217.453507125"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.101810 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.245758863 podStartE2EDuration="5.10178197s" podCreationTimestamp="2026-01-30 05:27:57 +0000 UTC" firstStartedPulling="2026-01-30 05:27:58.485631014 +0000 UTC m=+1213.855541271" lastFinishedPulling="2026-01-30 05:28:01.341654111 +0000 UTC m=+1216.711564378" observedRunningTime="2026-01-30 05:28:02.096315868 +0000 UTC m=+1217.466226125" watchObservedRunningTime="2026-01-30 05:28:02.10178197 +0000 UTC m=+1217.471692227"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.129263 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.954664902 podStartE2EDuration="5.129239312s" podCreationTimestamp="2026-01-30 05:27:57 +0000 UTC" firstStartedPulling="2026-01-30 05:27:58.166724482 +0000 UTC m=+1213.536634739" lastFinishedPulling="2026-01-30 05:28:01.341298892 +0000 UTC m=+1216.711209149" observedRunningTime="2026-01-30 05:28:02.119544341 +0000 UTC m=+1217.489454618" watchObservedRunningTime="2026-01-30 05:28:02.129239312 +0000 UTC m=+1217.499149569"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.152584 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.114482617 podStartE2EDuration="5.152560627s" podCreationTimestamp="2026-01-30 05:27:57 +0000 UTC" firstStartedPulling="2026-01-30 05:27:58.304938947 +0000 UTC m=+1213.674849204" lastFinishedPulling="2026-01-30 05:28:01.343016937 +0000 UTC m=+1216.712927214" observedRunningTime="2026-01-30 05:28:02.138364659 +0000 UTC m=+1217.508274916" watchObservedRunningTime="2026-01-30 05:28:02.152560627 +0000 UTC m=+1217.522470884"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.656868 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.835679 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.836044 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 30 05:28:02 crc kubenswrapper[4931]: I0130 05:28:02.861829 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 30 05:28:03 crc kubenswrapper[4931]: I0130 05:28:03.071987 4931 generic.go:334] "Generic (PLEG): container finished" podID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerID="7b8d1ff499fbb8ebcbcbf87555e175867c57e89396d957fe7b851fe96dc36245" exitCode=143
Jan 30 05:28:03 crc kubenswrapper[4931]: I0130 05:28:03.073094 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e37fcb81-9df1-411b-b593-8ca56c518f33","Type":"ContainerDied","Data":"7b8d1ff499fbb8ebcbcbf87555e175867c57e89396d957fe7b851fe96dc36245"}
Jan 30 05:28:06 crc kubenswrapper[4931]: I0130 05:28:06.106077 4931 generic.go:334] "Generic (PLEG): container finished" podID="9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" containerID="6fe17572613dbd341b30bd762c1b5735b49c432308fe3f9a9ede6d5185282afe" exitCode=0
Jan 30 05:28:06 crc kubenswrapper[4931]: I0130 05:28:06.106172 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhn9j" event={"ID":"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f","Type":"ContainerDied","Data":"6fe17572613dbd341b30bd762c1b5735b49c432308fe3f9a9ede6d5185282afe"}
Jan 30 05:28:06 crc kubenswrapper[4931]: I0130 05:28:06.112004 4931 generic.go:334] "Generic (PLEG): container finished" podID="b0a8f8fe-306a-4373-bbb0-d96f2b498d62" containerID="346e5462e41c54c8f5c2422490f080d2b64f85c405ea5cc5337aa66fee775153" exitCode=0
Jan 30 05:28:06 crc kubenswrapper[4931]: I0130 05:28:06.112062 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9b8l8" event={"ID":"b0a8f8fe-306a-4373-bbb0-d96f2b498d62","Type":"ContainerDied","Data":"346e5462e41c54c8f5c2422490f080d2b64f85c405ea5cc5337aa66fee775153"}
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.494915 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhn9j"
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.605413 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9b8l8"
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.622622 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-combined-ca-bundle\") pod \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.622823 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-config-data\") pod \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.623090 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc9q6\" (UniqueName: \"kubernetes.io/projected/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-kube-api-access-dc9q6\") pod \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.623357 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-scripts\") pod \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\" (UID: \"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.630836 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-scripts" (OuterVolumeSpecName: "scripts") pod "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" (UID: "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.631377 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-kube-api-access-dc9q6" (OuterVolumeSpecName: "kube-api-access-dc9q6") pod "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" (UID: "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f"). InnerVolumeSpecName "kube-api-access-dc9q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.654838 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-config-data" (OuterVolumeSpecName: "config-data") pod "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" (UID: "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.684938 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" (UID: "9262fbc3-2503-4252-b2dd-10cd8dcfbd6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.727156 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-scripts\") pod \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.727362 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-combined-ca-bundle\") pod \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.727492 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf54v\" (UniqueName: \"kubernetes.io/projected/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-kube-api-access-sf54v\") pod \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.727613 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-config-data\") pod \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\" (UID: \"b0a8f8fe-306a-4373-bbb0-d96f2b498d62\") "
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.728337 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.728372 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.728393 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc9q6\" (UniqueName: \"kubernetes.io/projected/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-kube-api-access-dc9q6\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.728413 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.729407 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-scripts" (OuterVolumeSpecName: "scripts") pod "b0a8f8fe-306a-4373-bbb0-d96f2b498d62" (UID: "b0a8f8fe-306a-4373-bbb0-d96f2b498d62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.730771 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-kube-api-access-sf54v" (OuterVolumeSpecName: "kube-api-access-sf54v") pod "b0a8f8fe-306a-4373-bbb0-d96f2b498d62" (UID: "b0a8f8fe-306a-4373-bbb0-d96f2b498d62"). InnerVolumeSpecName "kube-api-access-sf54v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.749765 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0a8f8fe-306a-4373-bbb0-d96f2b498d62" (UID: "b0a8f8fe-306a-4373-bbb0-d96f2b498d62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.752134 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-config-data" (OuterVolumeSpecName: "config-data") pod "b0a8f8fe-306a-4373-bbb0-d96f2b498d62" (UID: "b0a8f8fe-306a-4373-bbb0-d96f2b498d62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.811354 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.811442 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.831130 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.831206 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.831237 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf54v\" (UniqueName: \"kubernetes.io/projected/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-kube-api-access-sf54v\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.831264 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a8f8fe-306a-4373-bbb0-d96f2b498d62-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.861765 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.907259 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 30 05:28:07 crc kubenswrapper[4931]: I0130 05:28:07.999599 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.074355 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-vxxmk"]
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.074755 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerName="dnsmasq-dns" containerID="cri-o://1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f" gracePeriod=10
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.163977 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9b8l8" event={"ID":"b0a8f8fe-306a-4373-bbb0-d96f2b498d62","Type":"ContainerDied","Data":"831b8fc29d43f639d90655aa063689063afad41ee488b8e379f4edd22fec355a"}
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.164547 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="831b8fc29d43f639d90655aa063689063afad41ee488b8e379f4edd22fec355a"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.164653 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9b8l8"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.167736 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jhn9j"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.170603 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jhn9j" event={"ID":"9262fbc3-2503-4252-b2dd-10cd8dcfbd6f","Type":"ContainerDied","Data":"05fd8e29ff186451c0c9db4ef3d2f2174b59837370d11120981136e0fa9ba630"}
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.170641 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05fd8e29ff186451c0c9db4ef3d2f2174b59837370d11120981136e0fa9ba630"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.240777 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.247897 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 05:28:08 crc kubenswrapper[4931]: E0130 05:28:08.248377 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" containerName="nova-cell1-conductor-db-sync"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.248402 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" containerName="nova-cell1-conductor-db-sync"
Jan 30 05:28:08 crc kubenswrapper[4931]: E0130 05:28:08.248498 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a8f8fe-306a-4373-bbb0-d96f2b498d62" containerName="nova-manage"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.248509 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a8f8fe-306a-4373-bbb0-d96f2b498d62" containerName="nova-manage"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.248787 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a8f8fe-306a-4373-bbb0-d96f2b498d62" containerName="nova-manage"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.248828 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" containerName="nova-cell1-conductor-db-sync"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.249561 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.252251 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.279062 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.341762 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.341925 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.342029 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67hmc\" (UniqueName: \"kubernetes.io/projected/9bb44c01-e79f-42d8-912c-66db07c6b328-kube-api-access-67hmc\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.444990 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.461849 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67hmc\" (UniqueName: \"kubernetes.io/projected/9bb44c01-e79f-42d8-912c-66db07c6b328-kube-api-access-67hmc\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.461911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.458041 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.465299 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.478676 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.479015 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-log" containerID="cri-o://bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c" gracePeriod=30
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.479686 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-api" containerID="cri-o://0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386" gracePeriod=30
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.485793 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": EOF"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.494474 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67hmc\" (UniqueName: \"kubernetes.io/projected/9bb44c01-e79f-42d8-912c-66db07c6b328-kube-api-access-67hmc\") pod \"nova-cell1-conductor-0\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.506671 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": EOF"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.586592 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.669663 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk"
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.744291 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.766404 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-sb\") pod \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") "
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.766526 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-config\") pod \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") "
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.766771 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-svc\") pod \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") "
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.767022 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-682nm\" (UniqueName: \"kubernetes.io/projected/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-kube-api-access-682nm\") pod \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") "
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.767083 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-nb\") pod \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") "
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.767107 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-swift-storage-0\") pod \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\" (UID: \"98fe74d3-fa52-4814-8497-1a9bb9ea72ed\") "
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.776620 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-kube-api-access-682nm" (OuterVolumeSpecName: "kube-api-access-682nm") pod "98fe74d3-fa52-4814-8497-1a9bb9ea72ed" (UID: "98fe74d3-fa52-4814-8497-1a9bb9ea72ed"). InnerVolumeSpecName "kube-api-access-682nm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.832359 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98fe74d3-fa52-4814-8497-1a9bb9ea72ed" (UID: "98fe74d3-fa52-4814-8497-1a9bb9ea72ed"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.846504 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98fe74d3-fa52-4814-8497-1a9bb9ea72ed" (UID: "98fe74d3-fa52-4814-8497-1a9bb9ea72ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.852192 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98fe74d3-fa52-4814-8497-1a9bb9ea72ed" (UID: "98fe74d3-fa52-4814-8497-1a9bb9ea72ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.856539 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98fe74d3-fa52-4814-8497-1a9bb9ea72ed" (UID: "98fe74d3-fa52-4814-8497-1a9bb9ea72ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.856926 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-config" (OuterVolumeSpecName: "config") pod "98fe74d3-fa52-4814-8497-1a9bb9ea72ed" (UID: "98fe74d3-fa52-4814-8497-1a9bb9ea72ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.869391 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.869413 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-682nm\" (UniqueName: \"kubernetes.io/projected/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-kube-api-access-682nm\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.869463 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.869472 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.869500 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:08 crc kubenswrapper[4931]: I0130 05:28:08.869511 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98fe74d3-fa52-4814-8497-1a9bb9ea72ed-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.094555 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Jan 30 05:28:09 crc kubenswrapper[4931]: W0130 05:28:09.099654 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bb44c01_e79f_42d8_912c_66db07c6b328.slice/crio-1c009cefddaacdb91464295ae716d32cc8f92887e0517a9011cacb821ea578bd WatchSource:0}: Error finding container 1c009cefddaacdb91464295ae716d32cc8f92887e0517a9011cacb821ea578bd: Status 404 returned error can't find the container with id 1c009cefddaacdb91464295ae716d32cc8f92887e0517a9011cacb821ea578bd Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.183567 4931 generic.go:334] "Generic (PLEG): container finished" podID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerID="1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f" exitCode=0 Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.183633 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" event={"ID":"98fe74d3-fa52-4814-8497-1a9bb9ea72ed","Type":"ContainerDied","Data":"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f"} Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.183676 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" event={"ID":"98fe74d3-fa52-4814-8497-1a9bb9ea72ed","Type":"ContainerDied","Data":"e1641f306bf07b5142c6dd94dd4d7be821af4a934007d916dd0dd69749c5f578"} Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.183697 4931 scope.go:117] "RemoveContainer" containerID="1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f" Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.183604 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-vxxmk" Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.185048 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9bb44c01-e79f-42d8-912c-66db07c6b328","Type":"ContainerStarted","Data":"1c009cefddaacdb91464295ae716d32cc8f92887e0517a9011cacb821ea578bd"} Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.188273 4931 generic.go:334] "Generic (PLEG): container finished" podID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerID="bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c" exitCode=143 Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.188481 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f76c52b2-cfad-4017-a265-142c8e1b54f9","Type":"ContainerDied","Data":"bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c"} Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.222164 4931 scope.go:117] "RemoveContainer" containerID="096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd" Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.238937 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-vxxmk"] Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.259316 4931 scope.go:117] "RemoveContainer" containerID="1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f" Jan 30 05:28:09 crc kubenswrapper[4931]: E0130 05:28:09.259955 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f\": container with ID starting with 
Jan 30 05:28:09 crc kubenswrapper[4931]: E0130 05:28:09.259955 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f\": container with ID starting with 1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f not found: ID does not exist" containerID="1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f"
Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.260032 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f"} err="failed to get container status \"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f\": rpc error: code = NotFound desc = could not find container \"1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f\": container with ID starting with 1b840f2239f8f8dd8ec7c99cc9ea368efa35cb6a2c53f91934fe0a2a26cfb59f not found: ID does not exist"
Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.260065 4931 scope.go:117] "RemoveContainer" containerID="096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd"
Jan 30 05:28:09 crc kubenswrapper[4931]: E0130 05:28:09.260414 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd\": container with ID starting with 096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd not found: ID does not exist" containerID="096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd"
Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.260445 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd"} err="failed to get container status \"096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd\": rpc error: code = NotFound desc = could not find container \"096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd\": container with ID starting with 096e49c6163153a9fa1821f983bb1d1d489d24729a6de12e0a2cbc9a181c87cd not found: ID does not exist"
Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.279861 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-vxxmk"]
Jan 30 05:28:09 crc kubenswrapper[4931]: I0130 05:28:09.434373 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" path="/var/lib/kubelet/pods/98fe74d3-fa52-4814-8497-1a9bb9ea72ed/volumes"
Jan 30 05:28:10 crc kubenswrapper[4931]: I0130 05:28:10.197604 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9bb44c01-e79f-42d8-912c-66db07c6b328","Type":"ContainerStarted","Data":"9aac5eb9a7735bf1efc6d134170c4297691e32c19fa1a2cd01ab0ae918243436"}
Jan 30 05:28:10 crc kubenswrapper[4931]: I0130 05:28:10.198000 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 30 05:28:10 crc kubenswrapper[4931]: I0130 05:28:10.199239 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" containerName="nova-scheduler-scheduler" containerID="cri-o://56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16" gracePeriod=30
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:10.213943615 +0000 UTC m=+1225.583853872" watchObservedRunningTime="2026-01-30 05:28:10.215040173 +0000 UTC m=+1225.584950430" Jan 30 05:28:12 crc kubenswrapper[4931]: E0130 05:28:12.874464 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 05:28:12 crc kubenswrapper[4931]: E0130 05:28:12.881061 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 05:28:12 crc kubenswrapper[4931]: E0130 05:28:12.884152 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 05:28:12 crc kubenswrapper[4931]: E0130 05:28:12.884234 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" containerName="nova-scheduler-scheduler" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.238129 4931 generic.go:334] "Generic (PLEG): container finished" podID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" containerID="56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16" exitCode=0 Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.238184 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"018cf21e-c9c2-4ab4-8794-f6e066bafc86","Type":"ContainerDied","Data":"56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16"} Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.766792 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.811077 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24xlx\" (UniqueName: \"kubernetes.io/projected/018cf21e-c9c2-4ab4-8794-f6e066bafc86-kube-api-access-24xlx\") pod \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.811278 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-config-data\") pod \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.811382 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-combined-ca-bundle\") pod \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\" (UID: \"018cf21e-c9c2-4ab4-8794-f6e066bafc86\") " Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.818088 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018cf21e-c9c2-4ab4-8794-f6e066bafc86-kube-api-access-24xlx" (OuterVolumeSpecName: "kube-api-access-24xlx") pod "018cf21e-c9c2-4ab4-8794-f6e066bafc86" (UID: "018cf21e-c9c2-4ab4-8794-f6e066bafc86"). InnerVolumeSpecName "kube-api-access-24xlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.849806 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "018cf21e-c9c2-4ab4-8794-f6e066bafc86" (UID: "018cf21e-c9c2-4ab4-8794-f6e066bafc86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.866816 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-config-data" (OuterVolumeSpecName: "config-data") pod "018cf21e-c9c2-4ab4-8794-f6e066bafc86" (UID: "018cf21e-c9c2-4ab4-8794-f6e066bafc86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.913824 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24xlx\" (UniqueName: \"kubernetes.io/projected/018cf21e-c9c2-4ab4-8794-f6e066bafc86-kube-api-access-24xlx\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.913859 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:13 crc kubenswrapper[4931]: I0130 05:28:13.913870 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018cf21e-c9c2-4ab4-8794-f6e066bafc86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.184006 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.248062 4931 generic.go:334] "Generic (PLEG): container finished" podID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerID="0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386" exitCode=0 Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.248146 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f76c52b2-cfad-4017-a265-142c8e1b54f9","Type":"ContainerDied","Data":"0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386"} Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.248178 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f76c52b2-cfad-4017-a265-142c8e1b54f9","Type":"ContainerDied","Data":"ce62ba9a7d987c86fe58643c3364d299b44cd5fdb6e5e34e5b8f029af4afb80d"} Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.248200 4931 scope.go:117] "RemoveContainer" containerID="0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.248343 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.250883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"018cf21e-c9c2-4ab4-8794-f6e066bafc86","Type":"ContainerDied","Data":"f850561dfba344195d6aaf76e50b967d79d23fca8dbd9d77fa357655bcad14dc"} Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.251000 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.291029 4931 scope.go:117] "RemoveContainer" containerID="bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.305779 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.319974 4931 scope.go:117] "RemoveContainer" containerID="0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.320426 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9czc\" (UniqueName: \"kubernetes.io/projected/f76c52b2-cfad-4017-a265-142c8e1b54f9-kube-api-access-z9czc\") pod \"f76c52b2-cfad-4017-a265-142c8e1b54f9\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.320631 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76c52b2-cfad-4017-a265-142c8e1b54f9-logs\") pod \"f76c52b2-cfad-4017-a265-142c8e1b54f9\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.320719 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-combined-ca-bundle\") pod \"f76c52b2-cfad-4017-a265-142c8e1b54f9\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.320874 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-config-data\") pod 
\"f76c52b2-cfad-4017-a265-142c8e1b54f9\" (UID: \"f76c52b2-cfad-4017-a265-142c8e1b54f9\") " Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.321133 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76c52b2-cfad-4017-a265-142c8e1b54f9-logs" (OuterVolumeSpecName: "logs") pod "f76c52b2-cfad-4017-a265-142c8e1b54f9" (UID: "f76c52b2-cfad-4017-a265-142c8e1b54f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.320706 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386\": container with ID starting with 0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386 not found: ID does not exist" containerID="0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.321189 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386"} err="failed to get container status \"0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386\": rpc error: code = NotFound desc = could not find container \"0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386\": container with ID starting with 0191ef656c7021ce17be43d4051ed5a029ba9b9d53bf9beffcc79d9698624386 not found: ID does not exist" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.321218 4931 scope.go:117] "RemoveContainer" containerID="bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.321460 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76c52b2-cfad-4017-a265-142c8e1b54f9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.324942 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.326939 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c\": container with ID starting with bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c not found: ID does not exist" containerID="bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.326986 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c"} err="failed to get container status \"bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c\": rpc error: code = NotFound desc = could not find container \"bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c\": container with ID starting with bd57739faa4ffabdbe2aaad279c6333f7323539495e281e3c68aae4e3d40e10c not found: ID does not exist" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.327013 4931 scope.go:117] "RemoveContainer" containerID="56fd7ff0bc2b09ba7471742c19169b6e1897a3dc1f6e923f43d7c3edb6934d16" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.327470 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f76c52b2-cfad-4017-a265-142c8e1b54f9-kube-api-access-z9czc" (OuterVolumeSpecName: "kube-api-access-z9czc") pod "f76c52b2-cfad-4017-a265-142c8e1b54f9" (UID: "f76c52b2-cfad-4017-a265-142c8e1b54f9"). InnerVolumeSpecName "kube-api-access-z9czc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.333829 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.334264 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerName="dnsmasq-dns" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334281 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerName="dnsmasq-dns" Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.334297 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" containerName="nova-scheduler-scheduler" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334304 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" containerName="nova-scheduler-scheduler" Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.334322 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-api" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334329 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-api" Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.334343 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-log" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334349 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-log" Jan 30 05:28:14 crc kubenswrapper[4931]: E0130 05:28:14.334361 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerName="init" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334367 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerName="init" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334555 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="98fe74d3-fa52-4814-8497-1a9bb9ea72ed" containerName="dnsmasq-dns" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334566 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-log" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334573 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" containerName="nova-scheduler-scheduler" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.334587 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" containerName="nova-api-api" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.335186 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.338422 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.341901 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.362005 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-config-data" (OuterVolumeSpecName: "config-data") pod "f76c52b2-cfad-4017-a265-142c8e1b54f9" (UID: "f76c52b2-cfad-4017-a265-142c8e1b54f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.376271 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f76c52b2-cfad-4017-a265-142c8e1b54f9" (UID: "f76c52b2-cfad-4017-a265-142c8e1b54f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.423167 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.423263 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-config-data\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.423372 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8p8\" (UniqueName: \"kubernetes.io/projected/f4be9b51-9e05-4080-9aac-1e7a68785e90-kube-api-access-dr8p8\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.423513 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.423538 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76c52b2-cfad-4017-a265-142c8e1b54f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.423549 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9czc\" (UniqueName: \"kubernetes.io/projected/f76c52b2-cfad-4017-a265-142c8e1b54f9-kube-api-access-z9czc\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.524946 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-config-data\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " 
pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.525023 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8p8\" (UniqueName: \"kubernetes.io/projected/f4be9b51-9e05-4080-9aac-1e7a68785e90-kube-api-access-dr8p8\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.525096 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.529484 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.532540 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-config-data\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.544045 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8p8\" (UniqueName: \"kubernetes.io/projected/f4be9b51-9e05-4080-9aac-1e7a68785e90-kube-api-access-dr8p8\") pod \"nova-scheduler-0\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.660050 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.678156 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.689415 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.690848 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.696971 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.729792 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.731348 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.830486 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-config-data\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.830570 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.830625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78469f1d-85e0-488f-9334-f756e0410bba-logs\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.830650 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvsb\" (UniqueName: \"kubernetes.io/projected/78469f1d-85e0-488f-9334-f756e0410bba-kube-api-access-tpvsb\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.932317 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-config-data\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.932392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.932508 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78469f1d-85e0-488f-9334-f756e0410bba-logs\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.932527 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpvsb\" (UniqueName: \"kubernetes.io/projected/78469f1d-85e0-488f-9334-f756e0410bba-kube-api-access-tpvsb\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.933800 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78469f1d-85e0-488f-9334-f756e0410bba-logs\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.937551 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-config-data\") pod \"nova-api-0\" (UID: 
\"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.940489 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:14 crc kubenswrapper[4931]: I0130 05:28:14.952167 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpvsb\" (UniqueName: \"kubernetes.io/projected/78469f1d-85e0-488f-9334-f756e0410bba-kube-api-access-tpvsb\") pod \"nova-api-0\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " pod="openstack/nova-api-0" Jan 30 05:28:15 crc kubenswrapper[4931]: I0130 05:28:15.039243 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:15 crc kubenswrapper[4931]: I0130 05:28:15.217660 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:15 crc kubenswrapper[4931]: W0130 05:28:15.243685 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4be9b51_9e05_4080_9aac_1e7a68785e90.slice/crio-2dd5974fb15bfc06c0c4a9379a4055028a1a3284a814cf069766b15f03c9fc5e WatchSource:0}: Error finding container 2dd5974fb15bfc06c0c4a9379a4055028a1a3284a814cf069766b15f03c9fc5e: Status 404 returned error can't find the container with id 2dd5974fb15bfc06c0c4a9379a4055028a1a3284a814cf069766b15f03c9fc5e Jan 30 05:28:15 crc kubenswrapper[4931]: I0130 05:28:15.265382 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f4be9b51-9e05-4080-9aac-1e7a68785e90","Type":"ContainerStarted","Data":"2dd5974fb15bfc06c0c4a9379a4055028a1a3284a814cf069766b15f03c9fc5e"} Jan 30 05:28:15 crc kubenswrapper[4931]: I0130 05:28:15.431926 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018cf21e-c9c2-4ab4-8794-f6e066bafc86" path="/var/lib/kubelet/pods/018cf21e-c9c2-4ab4-8794-f6e066bafc86/volumes" Jan 30 05:28:15 crc kubenswrapper[4931]: I0130 05:28:15.432760 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76c52b2-cfad-4017-a265-142c8e1b54f9" path="/var/lib/kubelet/pods/f76c52b2-cfad-4017-a265-142c8e1b54f9/volumes" Jan 30 05:28:15 crc kubenswrapper[4931]: I0130 05:28:15.537513 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:16 crc kubenswrapper[4931]: I0130 05:28:16.277896 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78469f1d-85e0-488f-9334-f756e0410bba","Type":"ContainerStarted","Data":"e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244"} Jan 30 05:28:16 crc kubenswrapper[4931]: I0130 05:28:16.278221 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78469f1d-85e0-488f-9334-f756e0410bba","Type":"ContainerStarted","Data":"a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa"} Jan 30 05:28:16 crc kubenswrapper[4931]: I0130 05:28:16.278234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78469f1d-85e0-488f-9334-f756e0410bba","Type":"ContainerStarted","Data":"4f8866ca081593d16859461ac5678f090e967b3446220506bb4f1ab22cb7f8fb"} Jan 30 05:28:16 crc kubenswrapper[4931]: I0130 05:28:16.279457 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f4be9b51-9e05-4080-9aac-1e7a68785e90","Type":"ContainerStarted","Data":"fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd"} Jan 30 05:28:16 crc kubenswrapper[4931]: I0130 05:28:16.302310 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.302293319 podStartE2EDuration="2.302293319s" podCreationTimestamp="2026-01-30 05:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:16.300804617 +0000 UTC m=+1231.670714874" watchObservedRunningTime="2026-01-30 05:28:16.302293319 +0000 UTC m=+1231.672203576" Jan 30 05:28:16 crc kubenswrapper[4931]: I0130 05:28:16.323082 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.323059182 podStartE2EDuration="2.323059182s" podCreationTimestamp="2026-01-30 05:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:16.32085785 +0000 UTC m=+1231.690768127" watchObservedRunningTime="2026-01-30 05:28:16.323059182 +0000 UTC m=+1231.692969449" Jan 30 05:28:18 crc kubenswrapper[4931]: I0130 05:28:18.642902 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 05:28:19 crc kubenswrapper[4931]: I0130 05:28:19.732248 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 05:28:24 crc kubenswrapper[4931]: I0130 05:28:24.732150 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 05:28:24 crc kubenswrapper[4931]: I0130 05:28:24.782452 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 05:28:25 crc kubenswrapper[4931]: I0130 05:28:25.040170 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 05:28:25 crc kubenswrapper[4931]: I0130 05:28:25.040281 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 05:28:25 crc kubenswrapper[4931]: I0130 05:28:25.439344 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 05:28:26 crc kubenswrapper[4931]: I0130 05:28:26.122887 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 05:28:26 crc kubenswrapper[4931]: I0130 05:28:26.123240 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.456155 4931 generic.go:334] "Generic (PLEG): container finished" podID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerID="5894edfe624e331ed6304798892205879e56e8c005b21dd019bad175228231c6" exitCode=137 Jan 30 05:28:32 crc kubenswrapper[4931]: 
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.456187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e37fcb81-9df1-411b-b593-8ca56c518f33","Type":"ContainerDied","Data":"5894edfe624e331ed6304798892205879e56e8c005b21dd019bad175228231c6"}
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.459565 4931 generic.go:334] "Generic (PLEG): container finished" podID="f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" containerID="8389df76ed09df3262a31c554555764b5a96d65170cca0da5f3f94ed26654194" exitCode=137
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.459605 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3","Type":"ContainerDied","Data":"8389df76ed09df3262a31c554555764b5a96d65170cca0da5f3f94ed26654194"}
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.616763 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.624710 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.797989 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-config-data\") pod \"e37fcb81-9df1-411b-b593-8ca56c518f33\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") "
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798150 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m84h6\" (UniqueName: \"kubernetes.io/projected/e37fcb81-9df1-411b-b593-8ca56c518f33-kube-api-access-m84h6\") pod \"e37fcb81-9df1-411b-b593-8ca56c518f33\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") "
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798245 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37fcb81-9df1-411b-b593-8ca56c518f33-logs\") pod \"e37fcb81-9df1-411b-b593-8ca56c518f33\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") "
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798302 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-config-data\") pod \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") "
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-combined-ca-bundle\") pod \"e37fcb81-9df1-411b-b593-8ca56c518f33\" (UID: \"e37fcb81-9df1-411b-b593-8ca56c518f33\") "
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798405 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-combined-ca-bundle\") pod \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") "
Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798500 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwsz9\" (UniqueName: 
\"kubernetes.io/projected/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-kube-api-access-mwsz9\") pod \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\" (UID: \"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3\") " Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.798904 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e37fcb81-9df1-411b-b593-8ca56c518f33-logs" (OuterVolumeSpecName: "logs") pod "e37fcb81-9df1-411b-b593-8ca56c518f33" (UID: "e37fcb81-9df1-411b-b593-8ca56c518f33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.799578 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e37fcb81-9df1-411b-b593-8ca56c518f33-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.810382 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37fcb81-9df1-411b-b593-8ca56c518f33-kube-api-access-m84h6" (OuterVolumeSpecName: "kube-api-access-m84h6") pod "e37fcb81-9df1-411b-b593-8ca56c518f33" (UID: "e37fcb81-9df1-411b-b593-8ca56c518f33"). InnerVolumeSpecName "kube-api-access-m84h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.810827 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-kube-api-access-mwsz9" (OuterVolumeSpecName: "kube-api-access-mwsz9") pod "f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" (UID: "f06a8661-ec14-48e2-a48b-2ecfec7b8ea3"). InnerVolumeSpecName "kube-api-access-mwsz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.830230 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" (UID: "f06a8661-ec14-48e2-a48b-2ecfec7b8ea3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.841961 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-config-data" (OuterVolumeSpecName: "config-data") pod "f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" (UID: "f06a8661-ec14-48e2-a48b-2ecfec7b8ea3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.867416 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-config-data" (OuterVolumeSpecName: "config-data") pod "e37fcb81-9df1-411b-b593-8ca56c518f33" (UID: "e37fcb81-9df1-411b-b593-8ca56c518f33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.869565 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e37fcb81-9df1-411b-b593-8ca56c518f33" (UID: "e37fcb81-9df1-411b-b593-8ca56c518f33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.901883 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.902096 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.902236 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwsz9\" (UniqueName: \"kubernetes.io/projected/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-kube-api-access-mwsz9\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.902389 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e37fcb81-9df1-411b-b593-8ca56c518f33-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.902617 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m84h6\" (UniqueName: \"kubernetes.io/projected/e37fcb81-9df1-411b-b593-8ca56c518f33-kube-api-access-m84h6\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:32 crc kubenswrapper[4931]: I0130 05:28:32.902748 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.478137 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e37fcb81-9df1-411b-b593-8ca56c518f33","Type":"ContainerDied","Data":"78bd5fff55edf34b6d1b969823b41d22ccbf3eabadd222063f311257dc45d17d"} Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.478220 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.478698 4931 scope.go:117] "RemoveContainer" containerID="5894edfe624e331ed6304798892205879e56e8c005b21dd019bad175228231c6" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.481976 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f06a8661-ec14-48e2-a48b-2ecfec7b8ea3","Type":"ContainerDied","Data":"77d36bb8a11804c505d48bebee2dbafaeb19326f0f1ced3b73af355d57d86b2b"} Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.482037 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.518893 4931 scope.go:117] "RemoveContainer" containerID="7b8d1ff499fbb8ebcbcbf87555e175867c57e89396d957fe7b851fe96dc36245" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.537848 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.565484 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.573978 4931 scope.go:117] "RemoveContainer" containerID="8389df76ed09df3262a31c554555764b5a96d65170cca0da5f3f94ed26654194" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.576730 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.587589 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.597641 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: E0130 05:28:33.598139 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-metadata" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.598165 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-metadata" Jan 30 05:28:33 crc kubenswrapper[4931]: E0130 05:28:33.598194 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-log" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.598208 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-log" Jan 30 05:28:33 crc kubenswrapper[4931]: E0130 05:28:33.598232 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.598245 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.598646 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.598689 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-log" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.598708 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" containerName="nova-metadata-metadata" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.599663 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.604708 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.606824 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.606832 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.608398 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.610116 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.612182 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.612187 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.616524 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.628227 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719038 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719123 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719309 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719570 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719622 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbcb\" (UniqueName: \"kubernetes.io/projected/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-kube-api-access-fhbcb\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " 
pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719713 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzsw\" (UniqueName: \"kubernetes.io/projected/2565fa42-f180-4948-8b2f-68c419d78d2b-kube-api-access-qvzsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719781 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-config-data\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.719948 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-logs\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.720011 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.720230 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.822234 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.822707 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.823051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.823261 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: 
I0130 05:28:33.823475 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbcb\" (UniqueName: \"kubernetes.io/projected/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-kube-api-access-fhbcb\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.823711 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.823976 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvzsw\" (UniqueName: \"kubernetes.io/projected/2565fa42-f180-4948-8b2f-68c419d78d2b-kube-api-access-qvzsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.824128 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-config-data\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.824308 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-logs\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.824453 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.824974 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-logs\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.830032 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.830483 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-config-data\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.831750 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.832943 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.832998 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.835228 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.835840 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.853121 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvzsw\" (UniqueName: \"kubernetes.io/projected/2565fa42-f180-4948-8b2f-68c419d78d2b-kube-api-access-qvzsw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.853339 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbcb\" (UniqueName: \"kubernetes.io/projected/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-kube-api-access-fhbcb\") pod \"nova-metadata-0\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") " pod="openstack/nova-metadata-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.958973 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:33 crc kubenswrapper[4931]: I0130 05:28:33.964652 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:28:34 crc kubenswrapper[4931]: I0130 05:28:34.304415 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:28:34 crc kubenswrapper[4931]: I0130 05:28:34.495915 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2565fa42-f180-4948-8b2f-68c419d78d2b","Type":"ContainerStarted","Data":"182ca03d45434848993e7087501801fbb8335a526ad9960a6da96e395124bc68"} Jan 30 05:28:34 crc kubenswrapper[4931]: I0130 05:28:34.591255 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.045623 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.046045 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.047091 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.051858 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.444271 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e37fcb81-9df1-411b-b593-8ca56c518f33" path="/var/lib/kubelet/pods/e37fcb81-9df1-411b-b593-8ca56c518f33/volumes" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.447618 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06a8661-ec14-48e2-a48b-2ecfec7b8ea3" path="/var/lib/kubelet/pods/f06a8661-ec14-48e2-a48b-2ecfec7b8ea3/volumes" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.516757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873","Type":"ContainerStarted","Data":"c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322"} Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.516818 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873","Type":"ContainerStarted","Data":"df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e"} Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.516840 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873","Type":"ContainerStarted","Data":"ce7e3daf9b312cef06426fb53814cad3b92811315fdfd1796c0688baab6a72ed"} Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.523132 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2565fa42-f180-4948-8b2f-68c419d78d2b","Type":"ContainerStarted","Data":"4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe"} Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.523199 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.527917 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.561221 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.561199843 podStartE2EDuration="2.561199843s" podCreationTimestamp="2026-01-30 05:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:35.543507797 +0000 UTC m=+1250.913418104" watchObservedRunningTime="2026-01-30 05:28:35.561199843 +0000 UTC m=+1250.931110110" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.616312 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.616291889 podStartE2EDuration="2.616291889s" podCreationTimestamp="2026-01-30 05:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:35.596980968 +0000 UTC m=+1250.966891255" watchObservedRunningTime="2026-01-30 05:28:35.616291889 +0000 UTC m=+1250.986202156" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.731347 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-ctzjd"] Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.733326 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.762957 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-ctzjd"] Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.886164 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.886269 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.886306 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.886565 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-config\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.886687 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-svc\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.886959 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrnld\" (UniqueName: \"kubernetes.io/projected/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-kube-api-access-nrnld\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.988672 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.988723 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.988755 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-config\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.988801 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-svc\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.989017 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrnld\" (UniqueName: \"kubernetes.io/projected/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-kube-api-access-nrnld\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.989049 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.990440 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-config\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.990597 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.990721 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.990837 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-svc\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:35 crc kubenswrapper[4931]: I0130 05:28:35.990950 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:36 crc kubenswrapper[4931]: I0130 05:28:36.014066 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrnld\" (UniqueName: \"kubernetes.io/projected/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-kube-api-access-nrnld\") pod \"dnsmasq-dns-5ddd577785-ctzjd\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:36 crc kubenswrapper[4931]: I0130 05:28:36.076767 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:36 crc kubenswrapper[4931]: I0130 05:28:36.617371 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-ctzjd"] Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.495548 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.496207 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-central-agent" containerID="cri-o://782c5f8dbe0e7576669ed328ec36323e5d152ca2c37c77db802604122975e914" gracePeriod=30 Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.496775 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="proxy-httpd" containerID="cri-o://149c802638d11431261e2009e655ac397a1354f084b06d7f2da2c77118f48d07" gracePeriod=30 Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.496850 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="sg-core" containerID="cri-o://37cec202033bdc3e70a415b285410f6ce8158b9541b9d20f36bd938b5978559c" gracePeriod=30 Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.496971 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-notification-agent" containerID="cri-o://89d3e2e5267fbad75d566e6ba9ac104cbc8326782fb62f0bd4ed8c4f9b169c76" gracePeriod=30 Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.539008 4931 generic.go:334] "Generic (PLEG): container finished" podID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerID="d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2" 
exitCode=0 Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.540059 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" event={"ID":"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d","Type":"ContainerDied","Data":"d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2"} Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.540089 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" event={"ID":"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d","Type":"ContainerStarted","Data":"645723e127490c600cf593cc161f0207c0a197195fa54096da51b7634ddd33ac"} Jan 30 05:28:37 crc kubenswrapper[4931]: I0130 05:28:37.816273 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.564450 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" event={"ID":"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d","Type":"ContainerStarted","Data":"e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e"} Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.564573 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.569755 4931 generic.go:334] "Generic (PLEG): container finished" podID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerID="149c802638d11431261e2009e655ac397a1354f084b06d7f2da2c77118f48d07" exitCode=0 Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.569787 4931 generic.go:334] "Generic (PLEG): container finished" podID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerID="37cec202033bdc3e70a415b285410f6ce8158b9541b9d20f36bd938b5978559c" exitCode=2 Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.569800 4931 generic.go:334] "Generic (PLEG): container finished" podID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerID="782c5f8dbe0e7576669ed328ec36323e5d152ca2c37c77db802604122975e914" exitCode=0 Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.569974 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-log" containerID="cri-o://a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa" gracePeriod=30 Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.570227 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerDied","Data":"149c802638d11431261e2009e655ac397a1354f084b06d7f2da2c77118f48d07"} Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.570266 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerDied","Data":"37cec202033bdc3e70a415b285410f6ce8158b9541b9d20f36bd938b5978559c"} Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.570282 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerDied","Data":"782c5f8dbe0e7576669ed328ec36323e5d152ca2c37c77db802604122975e914"} Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.570328 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-api" 
containerID="cri-o://e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244" gracePeriod=30 Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.613246 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" podStartSLOduration=3.6132185039999998 podStartE2EDuration="3.613218504s" podCreationTimestamp="2026-01-30 05:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:38.601103884 +0000 UTC m=+1253.971014141" watchObservedRunningTime="2026-01-30 05:28:38.613218504 +0000 UTC m=+1253.983128791" Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.959833 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.965746 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 05:28:38 crc kubenswrapper[4931]: I0130 05:28:38.965820 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 05:28:39 crc kubenswrapper[4931]: I0130 05:28:39.582615 4931 generic.go:334] "Generic (PLEG): container finished" podID="78469f1d-85e0-488f-9334-f756e0410bba" containerID="a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa" exitCode=143 Jan 30 05:28:39 crc kubenswrapper[4931]: I0130 05:28:39.582678 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78469f1d-85e0-488f-9334-f756e0410bba","Type":"ContainerDied","Data":"a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa"} Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.165010 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.322789 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78469f1d-85e0-488f-9334-f756e0410bba-logs\") pod \"78469f1d-85e0-488f-9334-f756e0410bba\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.323671 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78469f1d-85e0-488f-9334-f756e0410bba-logs" (OuterVolumeSpecName: "logs") pod "78469f1d-85e0-488f-9334-f756e0410bba" (UID: "78469f1d-85e0-488f-9334-f756e0410bba"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.323995 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-combined-ca-bundle\") pod \"78469f1d-85e0-488f-9334-f756e0410bba\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.324857 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-config-data\") pod \"78469f1d-85e0-488f-9334-f756e0410bba\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.324928 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpvsb\" (UniqueName: \"kubernetes.io/projected/78469f1d-85e0-488f-9334-f756e0410bba-kube-api-access-tpvsb\") pod \"78469f1d-85e0-488f-9334-f756e0410bba\" (UID: \"78469f1d-85e0-488f-9334-f756e0410bba\") " Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.325882 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78469f1d-85e0-488f-9334-f756e0410bba-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.329894 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78469f1d-85e0-488f-9334-f756e0410bba-kube-api-access-tpvsb" (OuterVolumeSpecName: "kube-api-access-tpvsb") pod "78469f1d-85e0-488f-9334-f756e0410bba" (UID: "78469f1d-85e0-488f-9334-f756e0410bba"). InnerVolumeSpecName "kube-api-access-tpvsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.367205 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-config-data" (OuterVolumeSpecName: "config-data") pod "78469f1d-85e0-488f-9334-f756e0410bba" (UID: "78469f1d-85e0-488f-9334-f756e0410bba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.370615 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78469f1d-85e0-488f-9334-f756e0410bba" (UID: "78469f1d-85e0-488f-9334-f756e0410bba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.427687 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.427727 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78469f1d-85e0-488f-9334-f756e0410bba-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.427740 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpvsb\" (UniqueName: \"kubernetes.io/projected/78469f1d-85e0-488f-9334-f756e0410bba-kube-api-access-tpvsb\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.632110 4931 generic.go:334] "Generic (PLEG): container finished" podID="78469f1d-85e0-488f-9334-f756e0410bba" containerID="e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244" exitCode=0 Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.632149 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.632171 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78469f1d-85e0-488f-9334-f756e0410bba","Type":"ContainerDied","Data":"e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244"} Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.632620 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"78469f1d-85e0-488f-9334-f756e0410bba","Type":"ContainerDied","Data":"4f8866ca081593d16859461ac5678f090e967b3446220506bb4f1ab22cb7f8fb"} Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.632642 4931 scope.go:117] "RemoveContainer" containerID="e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.670103 4931 scope.go:117] "RemoveContainer" containerID="a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.692194 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.695501 4931 scope.go:117] "RemoveContainer" containerID="e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244" Jan 30 05:28:42 crc kubenswrapper[4931]: E0130 05:28:42.696083 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244\": container with ID starting with e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244 not found: ID does not exist" containerID="e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.696131 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244"} err="failed to get container status \"e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244\": rpc error: code = NotFound desc = could not find container \"e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244\": container with ID starting with 
e65906c2e18f99d1ae8f7979b57948a0712e252ccec84428baf7905f59be9244 not found: ID does not exist" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.696172 4931 scope.go:117] "RemoveContainer" containerID="a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa" Jan 30 05:28:42 crc kubenswrapper[4931]: E0130 05:28:42.696627 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa\": container with ID starting with a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa not found: ID does not exist" containerID="a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.696660 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa"} err="failed to get container status \"a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa\": rpc error: code = NotFound desc = could not find container \"a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa\": container with ID starting with a86dbb9777f83307aa7e0153e43a1930d04e2a1808d8f994eb6f9cfbc049e3fa not found: ID does not exist" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.705821 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.714334 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:42 crc kubenswrapper[4931]: E0130 05:28:42.714822 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-api" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.714847 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-api" Jan 30 05:28:42 crc kubenswrapper[4931]: E0130 05:28:42.714880 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-log" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.714889 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-log" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.715106 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-log" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.715134 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="78469f1d-85e0-488f-9334-f756e0410bba" containerName="nova-api-api" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.716290 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.719699 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.719743 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.719882 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.726917 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.833384 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-public-tls-certs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.833945 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrpv\" (UniqueName: \"kubernetes.io/projected/84172ea2-ea94-454e-a247-3388dbd3f559-kube-api-access-2vrpv\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.834060 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-config-data\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.834130 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.834246 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.834592 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84172ea2-ea94-454e-a247-3388dbd3f559-logs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.937925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-public-tls-certs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.938016 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrpv\" (UniqueName: \"kubernetes.io/projected/84172ea2-ea94-454e-a247-3388dbd3f559-kube-api-access-2vrpv\") pod 
\"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.938055 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-config-data\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.938083 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.938116 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.938178 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84172ea2-ea94-454e-a247-3388dbd3f559-logs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.938659 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84172ea2-ea94-454e-a247-3388dbd3f559-logs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.942025 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-config-data\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.943258 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.943870 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-public-tls-certs\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.950822 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 30 05:28:42 crc kubenswrapper[4931]: I0130 05:28:42.953365 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrpv\" (UniqueName: \"kubernetes.io/projected/84172ea2-ea94-454e-a247-3388dbd3f559-kube-api-access-2vrpv\") pod \"nova-api-0\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " pod="openstack/nova-api-0" Jan 
30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.039600 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.438019 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78469f1d-85e0-488f-9334-f756e0410bba" path="/var/lib/kubelet/pods/78469f1d-85e0-488f-9334-f756e0410bba/volumes"
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.608643 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.645695 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84172ea2-ea94-454e-a247-3388dbd3f559","Type":"ContainerStarted","Data":"5def1c57849cb63c69d783f2aabd654a2c46a9078012cc3f016fb655404a7738"}
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.959670 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.965726 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.965779 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 30 05:28:43 crc kubenswrapper[4931]: I0130 05:28:43.986686 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.661672 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84172ea2-ea94-454e-a247-3388dbd3f559","Type":"ContainerStarted","Data":"32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61"}
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.662117 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84172ea2-ea94-454e-a247-3388dbd3f559","Type":"ContainerStarted","Data":"d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8"}
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.701973 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.701942326 podStartE2EDuration="2.701942326s" podCreationTimestamp="2026-01-30 05:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:44.683328683 +0000 UTC m=+1260.053238970" watchObservedRunningTime="2026-01-30 05:28:44.701942326 +0000 UTC m=+1260.071852623"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.707191 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.935374 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tjkcd"]
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.936809 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.942704 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.943371 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.944235 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tjkcd"]
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.978560 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 05:28:44 crc kubenswrapper[4931]: I0130 05:28:44.978609 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.085519 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-config-data\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.085624 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.085716 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwjb\" (UniqueName: \"kubernetes.io/projected/fac7a7da-7577-4269-8e37-fd964be6f75c-kube-api-access-bqwjb\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.085745 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-scripts\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.187811 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwjb\" (UniqueName: \"kubernetes.io/projected/fac7a7da-7577-4269-8e37-fd964be6f75c-kube-api-access-bqwjb\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.187890 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-scripts\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.187997 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-config-data\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.188117 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.197408 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-config-data\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.199215 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-scripts\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.211047 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.215148 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwjb\" (UniqueName: \"kubernetes.io/projected/fac7a7da-7577-4269-8e37-fd964be6f75c-kube-api-access-bqwjb\") pod \"nova-cell1-cell-mapping-tjkcd\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") " pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.256611 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:45 crc kubenswrapper[4931]: I0130 05:28:45.715422 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tjkcd"]
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.078751 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd"
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.190839 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-p25hj"]
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.191898 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerName="dnsmasq-dns" containerID="cri-o://a729151ede12640ea81c41f5d7f2d36efd861e7a4d31b991fe42dd4d2139fbe2" gracePeriod=10
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.687723 4931 generic.go:334] "Generic (PLEG): container finished" podID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerID="a729151ede12640ea81c41f5d7f2d36efd861e7a4d31b991fe42dd4d2139fbe2" exitCode=0
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.688020 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" event={"ID":"4dc14ae7-f05f-4093-838b-bdd419f4302f","Type":"ContainerDied","Data":"a729151ede12640ea81c41f5d7f2d36efd861e7a4d31b991fe42dd4d2139fbe2"}
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.688047 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj" event={"ID":"4dc14ae7-f05f-4093-838b-bdd419f4302f","Type":"ContainerDied","Data":"8d7831c7e52a58c64227ffa0e77bd37bd5e37f6fcdda5357867d37c004b57577"}
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.688056 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d7831c7e52a58c64227ffa0e77bd37bd5e37f6fcdda5357867d37c004b57577"
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.691580 4931 generic.go:334] "Generic (PLEG): container finished" podID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerID="89d3e2e5267fbad75d566e6ba9ac104cbc8326782fb62f0bd4ed8c4f9b169c76" exitCode=0
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.691700 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerDied","Data":"89d3e2e5267fbad75d566e6ba9ac104cbc8326782fb62f0bd4ed8c4f9b169c76"}
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.694435 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tjkcd" event={"ID":"fac7a7da-7577-4269-8e37-fd964be6f75c","Type":"ContainerStarted","Data":"a32ca29963fb38a6014b8500b2c2495801c36c3f1563f62dcc7d71405aa5c328"}
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.694478 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tjkcd" event={"ID":"fac7a7da-7577-4269-8e37-fd964be6f75c","Type":"ContainerStarted","Data":"12bb0a37bfd07d7c0922ec928aebe0dc629e0ea396fd8488eb5f2642c0f2d238"}
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.696416 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj"
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.704468 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.714581 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tjkcd" podStartSLOduration=2.714564247 podStartE2EDuration="2.714564247s" podCreationTimestamp="2026-01-30 05:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:46.712261512 +0000 UTC m=+1262.082171769" watchObservedRunningTime="2026-01-30 05:28:46.714564247 +0000 UTC m=+1262.084474494"
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.830999 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-svc\") pod \"4dc14ae7-f05f-4093-838b-bdd419f4302f\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831158 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-log-httpd\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831190 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-sb\") pod \"4dc14ae7-f05f-4093-838b-bdd419f4302f\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831214 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-run-httpd\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831239 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-config\") pod \"4dc14ae7-f05f-4093-838b-bdd419f4302f\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831275 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-ceilometer-tls-certs\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xw72\" (UniqueName: \"kubernetes.io/projected/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-kube-api-access-9xw72\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831344 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-combined-ca-bundle\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831367 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-swift-storage-0\") pod \"4dc14ae7-f05f-4093-838b-bdd419f4302f\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831385 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-scripts\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831415 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-config-data\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831513 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-sg-core-conf-yaml\") pod \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\" (UID: \"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831529 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftvmx\" (UniqueName: \"kubernetes.io/projected/4dc14ae7-f05f-4093-838b-bdd419f4302f-kube-api-access-ftvmx\") pod \"4dc14ae7-f05f-4093-838b-bdd419f4302f\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.831556 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-nb\") pod \"4dc14ae7-f05f-4093-838b-bdd419f4302f\" (UID: \"4dc14ae7-f05f-4093-838b-bdd419f4302f\") "
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.832333 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.832884 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.836599 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-kube-api-access-9xw72" (OuterVolumeSpecName: "kube-api-access-9xw72") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "kube-api-access-9xw72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.838537 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dc14ae7-f05f-4093-838b-bdd419f4302f-kube-api-access-ftvmx" (OuterVolumeSpecName: "kube-api-access-ftvmx") pod "4dc14ae7-f05f-4093-838b-bdd419f4302f" (UID: "4dc14ae7-f05f-4093-838b-bdd419f4302f"). InnerVolumeSpecName "kube-api-access-ftvmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.841783 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-scripts" (OuterVolumeSpecName: "scripts") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.910608 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.917807 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4dc14ae7-f05f-4093-838b-bdd419f4302f" (UID: "4dc14ae7-f05f-4093-838b-bdd419f4302f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933514 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933539 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933549 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xw72\" (UniqueName: \"kubernetes.io/projected/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-kube-api-access-9xw72\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933559 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933567 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933575 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.933584 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftvmx\" (UniqueName: \"kubernetes.io/projected/4dc14ae7-f05f-4093-838b-bdd419f4302f-kube-api-access-ftvmx\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.940802 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.942543 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dc14ae7-f05f-4093-838b-bdd419f4302f" (UID: "4dc14ae7-f05f-4093-838b-bdd419f4302f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.948201 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-config" (OuterVolumeSpecName: "config") pod "4dc14ae7-f05f-4093-838b-bdd419f4302f" (UID: "4dc14ae7-f05f-4093-838b-bdd419f4302f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.950028 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dc14ae7-f05f-4093-838b-bdd419f4302f" (UID: "4dc14ae7-f05f-4093-838b-bdd419f4302f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.968795 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.980819 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dc14ae7-f05f-4093-838b-bdd419f4302f" (UID: "4dc14ae7-f05f-4093-838b-bdd419f4302f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:28:46 crc kubenswrapper[4931]: I0130 05:28:46.983698 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-config-data" (OuterVolumeSpecName: "config-data") pod "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" (UID: "9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035793 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035834 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035848 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035862 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035874 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035885 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc14ae7-f05f-4093-838b-bdd419f4302f-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.035896 4931 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.711211 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7","Type":"ContainerDied","Data":"011e359b19d615fee43023e3c1e45b97d099d18b94cb63d33b5905aec3a68e27"}
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.711277 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-p25hj"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.711301 4931 scope.go:117] "RemoveContainer" containerID="149c802638d11431261e2009e655ac397a1354f084b06d7f2da2c77118f48d07"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.711605 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.760401 4931 scope.go:117] "RemoveContainer" containerID="37cec202033bdc3e70a415b285410f6ce8158b9541b9d20f36bd938b5978559c"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.784220 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-p25hj"]
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.795972 4931 scope.go:117] "RemoveContainer" containerID="89d3e2e5267fbad75d566e6ba9ac104cbc8326782fb62f0bd4ed8c4f9b169c76"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.802413 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-p25hj"]
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.820531 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.863036 4931 scope.go:117] "RemoveContainer" containerID="782c5f8dbe0e7576669ed328ec36323e5d152ca2c37c77db802604122975e914"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.873320 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.892386 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 05:28:47 crc kubenswrapper[4931]: E0130 05:28:47.892820 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerName="init"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.892844 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerName="init"
Jan 30 05:28:47 crc kubenswrapper[4931]: E0130 05:28:47.892860 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerName="dnsmasq-dns"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.892869 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerName="dnsmasq-dns"
Jan 30 05:28:47 crc kubenswrapper[4931]: E0130 05:28:47.892881 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-central-agent"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.892890 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-central-agent"
Jan 30 05:28:47 crc kubenswrapper[4931]: E0130 05:28:47.892907 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="sg-core"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.892914 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="sg-core"
Jan 30 05:28:47 crc kubenswrapper[4931]: E0130 05:28:47.892967 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="proxy-httpd"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.892977 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="proxy-httpd"
Jan 30 05:28:47 crc kubenswrapper[4931]: E0130 05:28:47.892997 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-notification-agent"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.893005 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-notification-agent"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.893306 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-notification-agent"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.893324 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="proxy-httpd"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.893344 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" containerName="dnsmasq-dns"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.893365 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="ceilometer-central-agent"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.893378 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" containerName="sg-core"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.895743 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.898653 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.898826 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.900440 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 05:28:47 crc kubenswrapper[4931]: I0130 05:28:47.905243 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.063736 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-scripts\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.063840 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-run-httpd\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.063878 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.063905 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.063938 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.063960 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-log-httpd\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.064144 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-config-data\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.064248 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4df4\" (UniqueName: \"kubernetes.io/projected/cacfcbd5-8c12-4fc5-88ce-516fda23464d-kube-api-access-q4df4\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.166697 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.167103 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.167297 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.167510 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-log-httpd\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.167804 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-config-data\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.168679 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4df4\" (UniqueName: \"kubernetes.io/projected/cacfcbd5-8c12-4fc5-88ce-516fda23464d-kube-api-access-q4df4\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.169028 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-scripts\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.169256 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-run-httpd\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.168065 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-log-httpd\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.169913 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-run-httpd\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.174816 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.176565 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-scripts\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.177103 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-config-data\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.179461 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.182122 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.191504 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4df4\" (UniqueName: \"kubernetes.io/projected/cacfcbd5-8c12-4fc5-88ce-516fda23464d-kube-api-access-q4df4\") pod \"ceilometer-0\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.217682 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 05:28:48 crc kubenswrapper[4931]: I0130 05:28:48.728625 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 05:28:49 crc kubenswrapper[4931]: I0130 05:28:49.444251 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dc14ae7-f05f-4093-838b-bdd419f4302f" path="/var/lib/kubelet/pods/4dc14ae7-f05f-4093-838b-bdd419f4302f/volumes"
Jan 30 05:28:49 crc kubenswrapper[4931]: I0130 05:28:49.446457 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7" path="/var/lib/kubelet/pods/9bf8c845-e69c-41e6-9e59-4b9b9fceaaf7/volumes"
Jan 30 05:28:49 crc kubenswrapper[4931]: I0130 05:28:49.753784 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerStarted","Data":"9186f065f875b33ba8c3817977c815fff7b67dc7404ddb06ea6a85fb64800755"}
Jan 30 05:28:49 crc kubenswrapper[4931]: I0130 05:28:49.754108 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerStarted","Data":"99a1153c1cd92ab2a34d0651a54dd16cc1116a03a4d5c96b1f4e7e5abbde1e2d"}
Jan 30 05:28:50 crc kubenswrapper[4931]: I0130 05:28:50.763208 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerStarted","Data":"0f6848e1ccd25c33da13cfce62f451555f794b35623ff124d320281a39cb9911"}
Jan 30 05:28:50 crc kubenswrapper[4931]: I0130 05:28:50.763521 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerStarted","Data":"62da5f526098b3b9f5437a81119156f87878963a1c44c026236d9b63e20bbac5"}
Jan 30 05:28:50 crc kubenswrapper[4931]: I0130 05:28:50.766137 4931 generic.go:334] "Generic (PLEG): container finished" podID="fac7a7da-7577-4269-8e37-fd964be6f75c" containerID="a32ca29963fb38a6014b8500b2c2495801c36c3f1563f62dcc7d71405aa5c328" exitCode=0
Jan 30 05:28:50 crc kubenswrapper[4931]: I0130 05:28:50.766180 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tjkcd" event={"ID":"fac7a7da-7577-4269-8e37-fd964be6f75c","Type":"ContainerDied","Data":"a32ca29963fb38a6014b8500b2c2495801c36c3f1563f62dcc7d71405aa5c328"}
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.188029 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.350641 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-scripts\") pod \"fac7a7da-7577-4269-8e37-fd964be6f75c\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") "
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.350902 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-combined-ca-bundle\") pod \"fac7a7da-7577-4269-8e37-fd964be6f75c\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") "
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.350998 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwjb\" (UniqueName: \"kubernetes.io/projected/fac7a7da-7577-4269-8e37-fd964be6f75c-kube-api-access-bqwjb\") pod \"fac7a7da-7577-4269-8e37-fd964be6f75c\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") "
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.351149 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-config-data\") pod \"fac7a7da-7577-4269-8e37-fd964be6f75c\" (UID: \"fac7a7da-7577-4269-8e37-fd964be6f75c\") "
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.357802 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-scripts" (OuterVolumeSpecName: "scripts") pod "fac7a7da-7577-4269-8e37-fd964be6f75c" (UID: "fac7a7da-7577-4269-8e37-fd964be6f75c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.363819 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac7a7da-7577-4269-8e37-fd964be6f75c-kube-api-access-bqwjb" (OuterVolumeSpecName: "kube-api-access-bqwjb") pod "fac7a7da-7577-4269-8e37-fd964be6f75c" (UID: "fac7a7da-7577-4269-8e37-fd964be6f75c"). InnerVolumeSpecName "kube-api-access-bqwjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.402480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-config-data" (OuterVolumeSpecName: "config-data") pod "fac7a7da-7577-4269-8e37-fd964be6f75c" (UID: "fac7a7da-7577-4269-8e37-fd964be6f75c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.409238 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fac7a7da-7577-4269-8e37-fd964be6f75c" (UID: "fac7a7da-7577-4269-8e37-fd964be6f75c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.455179 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqwjb\" (UniqueName: \"kubernetes.io/projected/fac7a7da-7577-4269-8e37-fd964be6f75c-kube-api-access-bqwjb\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.455798 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.455831 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.455861 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac7a7da-7577-4269-8e37-fd964be6f75c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.794608 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tjkcd" event={"ID":"fac7a7da-7577-4269-8e37-fd964be6f75c","Type":"ContainerDied","Data":"12bb0a37bfd07d7c0922ec928aebe0dc629e0ea396fd8488eb5f2642c0f2d238"}
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.794663 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12bb0a37bfd07d7c0922ec928aebe0dc629e0ea396fd8488eb5f2642c0f2d238"
Jan 30 05:28:52 crc kubenswrapper[4931]: I0130 05:28:52.794672 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tjkcd"
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.040781 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.040823 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.065842 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.066104 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f4be9b51-9e05-4080-9aac-1e7a68785e90" containerName="nova-scheduler-scheduler" containerID="cri-o://fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" gracePeriod=30
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.086194 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.094461 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.094726 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-log" containerID="cri-o://df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e" gracePeriod=30
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.094871 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-metadata" containerID="cri-o://c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322" gracePeriod=30
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.804890 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerStarted","Data":"25cc12087ab98d0fc79e679c4de5be61f557329f293a5a68393ba8b20a57c428"}
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.805330 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.808098 4931 generic.go:334] "Generic (PLEG): container finished" podID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerID="df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e" exitCode=143
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.808199 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873","Type":"ContainerDied","Data":"df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e"}
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.808340 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-log" containerID="cri-o://d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8" gracePeriod=30
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.808375 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-api" containerID="cri-o://32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61" gracePeriod=30
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.815653 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": EOF"
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.815832 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": EOF"
Jan 30 05:28:53 crc kubenswrapper[4931]: I0130 05:28:53.842517 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.970462409 podStartE2EDuration="6.842494502s" podCreationTimestamp="2026-01-30 05:28:47 +0000 UTC" firstStartedPulling="2026-01-30 05:28:48.739383351 +0000 UTC m=+1264.109293618" lastFinishedPulling="2026-01-30 05:28:52.611415414 +0000 UTC m=+1267.981325711" observedRunningTime="2026-01-30 05:28:53.838170561 +0000 UTC m=+1269.208080828" watchObservedRunningTime="2026-01-30 05:28:53.842494502 +0000 UTC m=+1269.212404759"
Jan 30 05:28:54 crc kubenswrapper[4931]: E0130 05:28:54.734486 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 05:28:54 crc kubenswrapper[4931]: E0130 05:28:54.736165 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 05:28:54 crc kubenswrapper[4931]: E0130 05:28:54.738075 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 05:28:54 crc kubenswrapper[4931]: E0130 05:28:54.738104 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f4be9b51-9e05-4080-9aac-1e7a68785e90" containerName="nova-scheduler-scheduler"
Jan 30 05:28:54 crc kubenswrapper[4931]: I0130 05:28:54.834072 4931 generic.go:334] "Generic (PLEG): container finished" podID="84172ea2-ea94-454e-a247-3388dbd3f559" containerID="d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8" exitCode=143
Jan 30 05:28:54 crc kubenswrapper[4931]: I0130 05:28:54.834123 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84172ea2-ea94-454e-a247-3388dbd3f559","Type":"ContainerDied","Data":"d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8"}
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.806336 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.864843 4931 generic.go:334] "Generic (PLEG): container finished" podID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerID="c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322" exitCode=0
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.864892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873","Type":"ContainerDied","Data":"c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322"}
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.864921 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873","Type":"ContainerDied","Data":"ce7e3daf9b312cef06426fb53814cad3b92811315fdfd1796c0688baab6a72ed"}
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.864942 4931 scope.go:117] "RemoveContainer" containerID="c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.864998 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.904102 4931 scope.go:117] "RemoveContainer" containerID="df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.925607 4931 scope.go:117] "RemoveContainer" containerID="c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322"
Jan 30 05:28:56 crc kubenswrapper[4931]: E0130 05:28:56.926307 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322\": container with ID starting with c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322 not found: ID does not exist" containerID="c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.926347 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322"} err="failed to get container status \"c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322\": rpc error: code = NotFound desc = could not find container \"c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322\": container with ID starting with c1b78b938372c88282024e91925a759a66838d2af03e52c363529c73f9f70322 not found: ID does not exist"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.926408 4931 scope.go:117] "RemoveContainer" containerID="df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e"
Jan 30 05:28:56 crc kubenswrapper[4931]: E0130 05:28:56.926836 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e\": container with ID starting with df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e not found: ID does not exist" containerID="df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.926866 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e"} err="failed to get container status \"df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e\": rpc error: code = NotFound desc = could not find container \"df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e\": container with ID starting with df793b7adeadd15a2ed69b8d798d6152c6ea016bd80ec1e05b3e19a35a19ae6e not found: ID does not exist"
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.960036 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-config-data\") pod \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") "
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.960121 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-nova-metadata-tls-certs\") pod \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") "
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.960173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-logs\") pod \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") "
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.960342 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-combined-ca-bundle\") pod \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") "
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.960584 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhbcb\" (UniqueName: \"kubernetes.io/projected/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-kube-api-access-fhbcb\") pod \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\" (UID: \"0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873\") "
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.961147 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-logs" (OuterVolumeSpecName: "logs") pod "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" (UID: "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.962087 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-logs\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:56 crc kubenswrapper[4931]: I0130 05:28:56.968739 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-kube-api-access-fhbcb" (OuterVolumeSpecName: "kube-api-access-fhbcb") pod "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" (UID: "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873"). InnerVolumeSpecName "kube-api-access-fhbcb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.005966 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" (UID: "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.031209 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-config-data" (OuterVolumeSpecName: "config-data") pod "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" (UID: "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.042789 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" (UID: "0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.065291 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhbcb\" (UniqueName: \"kubernetes.io/projected/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-kube-api-access-fhbcb\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.065330 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.065346 4931 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.065359 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.240558 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.263853 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.274947 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:28:57 crc kubenswrapper[4931]: E0130 05:28:57.275465 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac7a7da-7577-4269-8e37-fd964be6f75c" containerName="nova-manage"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.275490 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac7a7da-7577-4269-8e37-fd964be6f75c" containerName="nova-manage"
Jan 30 05:28:57 crc kubenswrapper[4931]: E0130 05:28:57.275521 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-metadata"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.275530 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-metadata"
Jan 30 05:28:57 crc kubenswrapper[4931]: E0130 05:28:57.275549 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-log"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.275558 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-log"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.275798 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-metadata"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.275827 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac7a7da-7577-4269-8e37-fd964be6f75c" containerName="nova-manage"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.275844 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" containerName="nova-metadata-log"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.277406 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.280396 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.280708 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.283041 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.363345 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.363465 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.375584 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8vz\" (UniqueName: \"kubernetes.io/projected/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-kube-api-access-pc8vz\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.375729 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-logs\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.375771 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-config-data\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.375919 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.376061 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.440039 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873" path="/var/lib/kubelet/pods/0c0f5a0d-26b4-49ab-8bca-8a29ae2d5873/volumes"
Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.478386 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc8vz\" (UniqueName: \"kubernetes.io/projected/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-kube-api-access-pc8vz\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.478568 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-logs\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.478738 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-config-data\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.478835 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.478909 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.479859 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-logs\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.485626 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.485752 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-config-data\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.489291 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc kubenswrapper[4931]: I0130 05:28:57.507882 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc8vz\" (UniqueName: \"kubernetes.io/projected/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-kube-api-access-pc8vz\") pod \"nova-metadata-0\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") " pod="openstack/nova-metadata-0" Jan 30 05:28:57 crc 
kubenswrapper[4931]: I0130 05:28:57.594306 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:28:58 crc kubenswrapper[4931]: W0130 05:28:58.123607 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4e6d6a8_599b_4ab9_b1f7_cf521e455d74.slice/crio-575ed258be47595d56c46486254ee62d83c72e80bf57019828419331f46802a7 WatchSource:0}: Error finding container 575ed258be47595d56c46486254ee62d83c72e80bf57019828419331f46802a7: Status 404 returned error can't find the container with id 575ed258be47595d56c46486254ee62d83c72e80bf57019828419331f46802a7 Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.133572 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.736992 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.807396 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-config-data\") pod \"f4be9b51-9e05-4080-9aac-1e7a68785e90\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.807887 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-combined-ca-bundle\") pod \"f4be9b51-9e05-4080-9aac-1e7a68785e90\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.807929 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr8p8\" (UniqueName: \"kubernetes.io/projected/f4be9b51-9e05-4080-9aac-1e7a68785e90-kube-api-access-dr8p8\") pod \"f4be9b51-9e05-4080-9aac-1e7a68785e90\" (UID: \"f4be9b51-9e05-4080-9aac-1e7a68785e90\") " Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.812226 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4be9b51-9e05-4080-9aac-1e7a68785e90-kube-api-access-dr8p8" (OuterVolumeSpecName: "kube-api-access-dr8p8") pod "f4be9b51-9e05-4080-9aac-1e7a68785e90" (UID: "f4be9b51-9e05-4080-9aac-1e7a68785e90"). InnerVolumeSpecName "kube-api-access-dr8p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.843447 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4be9b51-9e05-4080-9aac-1e7a68785e90" (UID: "f4be9b51-9e05-4080-9aac-1e7a68785e90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.846872 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-config-data" (OuterVolumeSpecName: "config-data") pod "f4be9b51-9e05-4080-9aac-1e7a68785e90" (UID: "f4be9b51-9e05-4080-9aac-1e7a68785e90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.884712 4931 generic.go:334] "Generic (PLEG): container finished" podID="f4be9b51-9e05-4080-9aac-1e7a68785e90" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" exitCode=0 Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.884775 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f4be9b51-9e05-4080-9aac-1e7a68785e90","Type":"ContainerDied","Data":"fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd"} Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.884806 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f4be9b51-9e05-4080-9aac-1e7a68785e90","Type":"ContainerDied","Data":"2dd5974fb15bfc06c0c4a9379a4055028a1a3284a814cf069766b15f03c9fc5e"} Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.884826 4931 scope.go:117] "RemoveContainer" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.884948 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.888281 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74","Type":"ContainerStarted","Data":"6c90254ae67ae50ab19fa555ce55d1839d94322882a09bb91ad616b62efcfeeb"} Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.888317 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74","Type":"ContainerStarted","Data":"a268ff4ead170d4fc7c25a89e846ed2d0f10278b94da4082529cc4ebd9ab4f0e"} Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.888329 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74","Type":"ContainerStarted","Data":"575ed258be47595d56c46486254ee62d83c72e80bf57019828419331f46802a7"} Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.909842 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.909894 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4be9b51-9e05-4080-9aac-1e7a68785e90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.909921 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr8p8\" (UniqueName: \"kubernetes.io/projected/f4be9b51-9e05-4080-9aac-1e7a68785e90-kube-api-access-dr8p8\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.912720 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.912669119 podStartE2EDuration="1.912669119s" podCreationTimestamp="2026-01-30 05:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:28:58.90521395 +0000 UTC m=+1274.275124227" watchObservedRunningTime="2026-01-30 05:28:58.912669119 +0000 UTC m=+1274.282579406" Jan 30 
05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.930175 4931 scope.go:117] "RemoveContainer" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" Jan 30 05:28:58 crc kubenswrapper[4931]: E0130 05:28:58.930676 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd\": container with ID starting with fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd not found: ID does not exist" containerID="fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.930701 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd"} err="failed to get container status \"fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd\": rpc error: code = NotFound desc = could not find container \"fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd\": container with ID starting with fb13c2d310bf911723630c03371a3530f6d596636164c4a527400ca34ebbd7fd not found: ID does not exist" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.936531 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.950002 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.957411 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:58 crc kubenswrapper[4931]: E0130 05:28:58.957994 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4be9b51-9e05-4080-9aac-1e7a68785e90" containerName="nova-scheduler-scheduler" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.958018 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4be9b51-9e05-4080-9aac-1e7a68785e90" containerName="nova-scheduler-scheduler" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.958293 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4be9b51-9e05-4080-9aac-1e7a68785e90" containerName="nova-scheduler-scheduler" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.959097 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.961359 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 05:28:58 crc kubenswrapper[4931]: I0130 05:28:58.969575 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.113611 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5r9p\" (UniqueName: \"kubernetes.io/projected/1acfa9c2-a802-404e-976b-93d9f99e1fbb-kube-api-access-z5r9p\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.113677 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-config-data\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.114057 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.216549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5r9p\" (UniqueName: \"kubernetes.io/projected/1acfa9c2-a802-404e-976b-93d9f99e1fbb-kube-api-access-z5r9p\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.216617 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-config-data\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.216717 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.222943 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.222991 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-config-data\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.240547 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5r9p\" (UniqueName: 
\"kubernetes.io/projected/1acfa9c2-a802-404e-976b-93d9f99e1fbb-kube-api-access-z5r9p\") pod \"nova-scheduler-0\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.274729 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.438119 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4be9b51-9e05-4080-9aac-1e7a68785e90" path="/var/lib/kubelet/pods/f4be9b51-9e05-4080-9aac-1e7a68785e90/volumes" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.758387 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.828211 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-public-tls-certs\") pod \"84172ea2-ea94-454e-a247-3388dbd3f559\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.828260 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vrpv\" (UniqueName: \"kubernetes.io/projected/84172ea2-ea94-454e-a247-3388dbd3f559-kube-api-access-2vrpv\") pod \"84172ea2-ea94-454e-a247-3388dbd3f559\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.828344 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84172ea2-ea94-454e-a247-3388dbd3f559-logs\") pod \"84172ea2-ea94-454e-a247-3388dbd3f559\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.828374 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-internal-tls-certs\") pod \"84172ea2-ea94-454e-a247-3388dbd3f559\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.828434 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-combined-ca-bundle\") pod \"84172ea2-ea94-454e-a247-3388dbd3f559\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.828543 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-config-data\") pod \"84172ea2-ea94-454e-a247-3388dbd3f559\" (UID: \"84172ea2-ea94-454e-a247-3388dbd3f559\") " Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.829213 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84172ea2-ea94-454e-a247-3388dbd3f559-logs" (OuterVolumeSpecName: "logs") pod "84172ea2-ea94-454e-a247-3388dbd3f559" (UID: "84172ea2-ea94-454e-a247-3388dbd3f559"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.838508 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84172ea2-ea94-454e-a247-3388dbd3f559-kube-api-access-2vrpv" (OuterVolumeSpecName: "kube-api-access-2vrpv") pod "84172ea2-ea94-454e-a247-3388dbd3f559" (UID: "84172ea2-ea94-454e-a247-3388dbd3f559"). InnerVolumeSpecName "kube-api-access-2vrpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.878054 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-config-data" (OuterVolumeSpecName: "config-data") pod "84172ea2-ea94-454e-a247-3388dbd3f559" (UID: "84172ea2-ea94-454e-a247-3388dbd3f559"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.885010 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84172ea2-ea94-454e-a247-3388dbd3f559" (UID: "84172ea2-ea94-454e-a247-3388dbd3f559"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.909442 4931 generic.go:334] "Generic (PLEG): container finished" podID="84172ea2-ea94-454e-a247-3388dbd3f559" containerID="32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61" exitCode=0 Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.909509 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84172ea2-ea94-454e-a247-3388dbd3f559","Type":"ContainerDied","Data":"32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61"} Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.909703 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84172ea2-ea94-454e-a247-3388dbd3f559","Type":"ContainerDied","Data":"5def1c57849cb63c69d783f2aabd654a2c46a9078012cc3f016fb655404a7738"} Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.909731 4931 scope.go:117] "RemoveContainer" containerID="32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.909972 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.910745 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "84172ea2-ea94-454e-a247-3388dbd3f559" (UID: "84172ea2-ea94-454e-a247-3388dbd3f559"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.927198 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.936571 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.936635 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vrpv\" (UniqueName: \"kubernetes.io/projected/84172ea2-ea94-454e-a247-3388dbd3f559-kube-api-access-2vrpv\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.936647 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84172ea2-ea94-454e-a247-3388dbd3f559-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.936660 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.936669 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.951656 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "84172ea2-ea94-454e-a247-3388dbd3f559" (UID: "84172ea2-ea94-454e-a247-3388dbd3f559"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:28:59 crc kubenswrapper[4931]: I0130 05:28:59.996766 4931 scope.go:117] "RemoveContainer" containerID="d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.022613 4931 scope.go:117] "RemoveContainer" containerID="32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61" Jan 30 05:29:00 crc kubenswrapper[4931]: E0130 05:29:00.023077 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61\": container with ID starting with 32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61 not found: ID does not exist" containerID="32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.023126 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61"} err="failed to get container status \"32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61\": rpc error: code = NotFound desc = could not find container \"32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61\": container with ID starting with 32f9b0c81902795a5f01b845ba8964a6d9eb951a58b9171a271cd3f750b03f61 not found: ID does not exist" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.023145 4931 scope.go:117] "RemoveContainer" containerID="d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8" Jan 30 05:29:00 crc kubenswrapper[4931]: E0130 05:29:00.023511 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8\": container with ID starting with d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8 not found: ID does not exist" containerID="d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.023530 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8"} err="failed to get container status \"d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8\": rpc error: code = NotFound desc = could not find container \"d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8\": container with ID starting with d14ecfe1fd471fb982dc61b1f42b768a824415136baffeefdc2bb1ee2a9426e8 not found: ID does not exist" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.038143 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84172ea2-ea94-454e-a247-3388dbd3f559-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.263905 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.285313 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.305653 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:00 crc kubenswrapper[4931]: E0130 05:29:00.306319 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-api" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.306352 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-api" Jan 30 05:29:00 crc kubenswrapper[4931]: E0130 05:29:00.306401 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-log" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.306417 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-log" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.306777 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-log" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.306807 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" containerName="nova-api-api" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.308532 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.316561 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.316869 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.317071 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.329056 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.445514 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dkxm\" (UniqueName: \"kubernetes.io/projected/406c25f3-c398-4ace-ba4b-1d9b48b289a2-kube-api-access-8dkxm\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.445651 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.445704 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-public-tls-certs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.445777 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c25f3-c398-4ace-ba4b-1d9b48b289a2-logs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.445835 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.445962 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-config-data\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547117 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-config-data\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547264 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dkxm\" (UniqueName: \"kubernetes.io/projected/406c25f3-c398-4ace-ba4b-1d9b48b289a2-kube-api-access-8dkxm\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547307 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547323 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-public-tls-certs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547353 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c25f3-c398-4ace-ba4b-1d9b48b289a2-logs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547379 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.547994 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c25f3-c398-4ace-ba4b-1d9b48b289a2-logs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.554493 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-config-data\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.554856 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-public-tls-certs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.560476 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.564120 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.583305 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dkxm\" (UniqueName: \"kubernetes.io/projected/406c25f3-c398-4ace-ba4b-1d9b48b289a2-kube-api-access-8dkxm\") pod \"nova-api-0\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") " pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.681301 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.927772 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1acfa9c2-a802-404e-976b-93d9f99e1fbb","Type":"ContainerStarted","Data":"cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52"} Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.927841 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1acfa9c2-a802-404e-976b-93d9f99e1fbb","Type":"ContainerStarted","Data":"24154bd6bbe2da670ea864204ee97206379a1b7b92792be6f14d33757f908143"} Jan 30 05:29:00 crc kubenswrapper[4931]: I0130 05:29:00.972372 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.972342271 podStartE2EDuration="2.972342271s" podCreationTimestamp="2026-01-30 05:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:29:00.948647016 +0000 UTC m=+1276.318557323" watchObservedRunningTime="2026-01-30 05:29:00.972342271 +0000 UTC m=+1276.342252548" Jan 30 05:29:01 crc kubenswrapper[4931]: I0130 05:29:01.069825 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:01 crc kubenswrapper[4931]: W0130 05:29:01.073407 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406c25f3_c398_4ace_ba4b_1d9b48b289a2.slice/crio-76659dd7c99e1a96db2d103669e2a9a9122278f8d46deab3719b00840b99f159 WatchSource:0}: Error finding container 76659dd7c99e1a96db2d103669e2a9a9122278f8d46deab3719b00840b99f159: Status 404 returned error can't find the container with id 76659dd7c99e1a96db2d103669e2a9a9122278f8d46deab3719b00840b99f159 Jan 30 05:29:01 crc kubenswrapper[4931]: I0130 05:29:01.439574 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84172ea2-ea94-454e-a247-3388dbd3f559" path="/var/lib/kubelet/pods/84172ea2-ea94-454e-a247-3388dbd3f559/volumes" Jan 30 05:29:01 crc kubenswrapper[4931]: I0130 
05:29:01.947816 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"406c25f3-c398-4ace-ba4b-1d9b48b289a2","Type":"ContainerStarted","Data":"d70e15b0f074e59f1a9f39048c2cf45a62e81400091cb70df139030d514fe003"} Jan 30 05:29:01 crc kubenswrapper[4931]: I0130 05:29:01.947866 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"406c25f3-c398-4ace-ba4b-1d9b48b289a2","Type":"ContainerStarted","Data":"e5dadd497214a5d2efc5b8027947f3661f7f73599b0778570358c42329955e8d"} Jan 30 05:29:01 crc kubenswrapper[4931]: I0130 05:29:01.947877 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"406c25f3-c398-4ace-ba4b-1d9b48b289a2","Type":"ContainerStarted","Data":"76659dd7c99e1a96db2d103669e2a9a9122278f8d46deab3719b00840b99f159"} Jan 30 05:29:01 crc kubenswrapper[4931]: I0130 05:29:01.974670 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.974650199 podStartE2EDuration="1.974650199s" podCreationTimestamp="2026-01-30 05:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:29:01.971558773 +0000 UTC m=+1277.341469050" watchObservedRunningTime="2026-01-30 05:29:01.974650199 +0000 UTC m=+1277.344560456" Jan 30 05:29:02 crc kubenswrapper[4931]: I0130 05:29:02.594804 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 05:29:02 crc kubenswrapper[4931]: I0130 05:29:02.594862 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 05:29:04 crc kubenswrapper[4931]: I0130 05:29:04.275286 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 05:29:07 crc kubenswrapper[4931]: I0130 05:29:07.595284 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 05:29:07 crc kubenswrapper[4931]: I0130 05:29:07.595351 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 05:29:08 crc kubenswrapper[4931]: I0130 05:29:08.608672 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 05:29:08 crc kubenswrapper[4931]: I0130 05:29:08.608804 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 05:29:09 crc kubenswrapper[4931]: I0130 05:29:09.275205 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 05:29:09 crc kubenswrapper[4931]: I0130 05:29:09.318584 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 05:29:10 crc kubenswrapper[4931]: I0130 05:29:10.070095 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 05:29:10 crc kubenswrapper[4931]: I0130 
05:29:10.683591 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 05:29:10 crc kubenswrapper[4931]: I0130 05:29:10.683644 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 05:29:11 crc kubenswrapper[4931]: I0130 05:29:11.698611 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 05:29:11 crc kubenswrapper[4931]: I0130 05:29:11.698681 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 05:29:17 crc kubenswrapper[4931]: I0130 05:29:17.603718 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 05:29:17 crc kubenswrapper[4931]: I0130 05:29:17.606631 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 05:29:17 crc kubenswrapper[4931]: I0130 05:29:17.611780 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 05:29:18 crc kubenswrapper[4931]: I0130 05:29:18.133582 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 05:29:18 crc kubenswrapper[4931]: I0130 05:29:18.229525 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 05:29:20 crc kubenswrapper[4931]: I0130 05:29:20.694316 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 05:29:20 crc kubenswrapper[4931]: I0130 05:29:20.695082 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 05:29:20 crc kubenswrapper[4931]: I0130 05:29:20.699626 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 05:29:20 crc kubenswrapper[4931]: I0130 05:29:20.703885 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 05:29:21 crc kubenswrapper[4931]: I0130 05:29:21.165302 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 05:29:21 crc kubenswrapper[4931]: I0130 05:29:21.177922 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 05:29:27 crc kubenswrapper[4931]: I0130 05:29:27.363554 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:29:27 crc kubenswrapper[4931]: I0130 05:29:27.364384 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.181112 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7789bbd757-45b5w"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.183115 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.203510 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.217187 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-867d8cd54-77bnr"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.220710 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.245895 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7789bbd757-45b5w"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.277352 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.277552 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="cinder-scheduler" containerID="cri-o://9398f7e713fb447b3a151d286be2d2910e4d8535fd421e906c46b8cc2c9a4728" gracePeriod=30 Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.277627 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="probe" containerID="cri-o://571155fa2c4a4cc11bc78f96b7b5b636cdda183726d54338d2bf0cc02d77f003" gracePeriod=30 Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.320490 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.320884 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" containerName="openstackclient" containerID="cri-o://998f26954c016e9a4be4fed72f68f879a1f7793c171311545d8f4958871325fd" gracePeriod=2 Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325451 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-combined-ca-bundle\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325534 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325580 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwf5\" (UniqueName: 
\"kubernetes.io/projected/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-kube-api-access-jpwf5\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325609 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ac55021-a07e-443f-9ee9-e7516556b975-logs\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-logs\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325671 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mc9v\" (UniqueName: \"kubernetes.io/projected/9ac55021-a07e-443f-9ee9-e7516556b975-kube-api-access-7mc9v\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325696 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data-custom\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325736 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325788 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data-custom\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.325803 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-combined-ca-bundle\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.330247 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.330335 4931 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data podName:fc3f4796-66b1-452b-afca-5e62cbf2a53b nodeName:}" failed. No retries permitted until 2026-01-30 05:29:41.830320777 +0000 UTC m=+1317.200231034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data") pod "rabbitmq-server-0" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b") : configmap "rabbitmq-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.359512 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-867d8cd54-77bnr"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.382479 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.400410 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6xxt5"] Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.400824 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" containerName="openstackclient" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.400840 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" containerName="openstackclient" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.401036 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" containerName="openstackclient" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.401611 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.411613 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.427667 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6xxt5"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432713 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data-custom\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432755 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-combined-ca-bundle\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-combined-ca-bundle\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432865 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432896 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwf5\" (UniqueName: \"kubernetes.io/projected/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-kube-api-access-jpwf5\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432922 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ac55021-a07e-443f-9ee9-e7516556b975-logs\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432939 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-logs\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432973 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mc9v\" (UniqueName: \"kubernetes.io/projected/9ac55021-a07e-443f-9ee9-e7516556b975-kube-api-access-7mc9v\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.432994 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data-custom\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.433018 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.463579 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-logs\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.465289 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-combined-ca-bundle\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.472853 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ac55021-a07e-443f-9ee9-e7516556b975-logs\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.473182 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data-custom\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.479948 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-combined-ca-bundle\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.489876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.490353 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data-custom\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.531071 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mc9v\" (UniqueName: \"kubernetes.io/projected/9ac55021-a07e-443f-9ee9-e7516556b975-kube-api-access-7mc9v\") pod \"barbican-keystone-listener-867d8cd54-77bnr\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.538744 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts\") pod \"root-account-create-update-6xxt5\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.538854 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7wh\" (UniqueName: \"kubernetes.io/projected/623f3c8f-d741-4ba4-baca-905a13102f38-kube-api-access-pq7wh\") pod \"root-account-create-update-6xxt5\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.542070 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 
30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.544529 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpwf5\" (UniqueName: \"kubernetes.io/projected/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-kube-api-access-jpwf5\") pod \"barbican-worker-7789bbd757-45b5w\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.548093 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.636601 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-cb5c-account-create-update-n52qj"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.647943 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts\") pod \"root-account-create-update-6xxt5\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.648032 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7wh\" (UniqueName: \"kubernetes.io/projected/623f3c8f-d741-4ba4-baca-905a13102f38-kube-api-access-pq7wh\") pod \"root-account-create-update-6xxt5\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.649289 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts\") pod \"root-account-create-update-6xxt5\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.650591 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.663566 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-cb5c-account-create-update-n52qj"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.673123 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.687970 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.688228 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api-log" containerID="cri-o://c948d726013eb4e8273ef998118172023cae6536ca99db77a7f1ebd4884def12" gracePeriod=30 Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.688342 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api" containerID="cri-o://2823dcc09d156bc746ffbc3ab196c3d6e136f453bf377837d7cce10861a168f4" gracePeriod=30 Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.712692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7wh\" (UniqueName: \"kubernetes.io/projected/623f3c8f-d741-4ba4-baca-905a13102f38-kube-api-access-pq7wh\") pod \"root-account-create-update-6xxt5\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.713663 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-cbdc6b6c8-m9v7c"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.715128 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.728102 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.739977 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p975f"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.751411 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ad7de9-e01d-414c-8a4d-9073ad986186-operator-scripts\") pod \"barbican-cb5c-account-create-update-n52qj\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.751700 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9gv\" (UniqueName: \"kubernetes.io/projected/46ad7de9-e01d-414c-8a4d-9073ad986186-kube-api-access-5c9gv\") pod \"barbican-cb5c-account-create-update-n52qj\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.781821 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p975f"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.805733 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-cbdc6b6c8-m9v7c"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.806174 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.829529 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-cb5c-account-create-update-7n4vq"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.842924 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-cb5c-account-create-update-7n4vq"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855566 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9tkc\" (UniqueName: \"kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-public-tls-certs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855701 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-combined-ca-bundle\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855724 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-internal-tls-certs\") pod 
\"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855750 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9gv\" (UniqueName: \"kubernetes.io/projected/46ad7de9-e01d-414c-8a4d-9073ad986186-kube-api-access-5c9gv\") pod \"barbican-cb5c-account-create-update-n52qj\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855786 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855844 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98fff7bd-db4c-462f-8f2c-34733f4e81ad-logs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855882 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data-custom\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.855908 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ad7de9-e01d-414c-8a4d-9073ad986186-operator-scripts\") pod \"barbican-cb5c-account-create-update-n52qj\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.856542 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ad7de9-e01d-414c-8a4d-9073ad986186-operator-scripts\") pod \"barbican-cb5c-account-create-update-n52qj\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.856630 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.856669 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data podName:fc3f4796-66b1-452b-afca-5e62cbf2a53b nodeName:}" failed. No retries permitted until 2026-01-30 05:29:42.856656704 +0000 UTC m=+1318.226566961 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data") pod "rabbitmq-server-0" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b") : configmap "rabbitmq-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.877512 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.894077 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9gv\" (UniqueName: \"kubernetes.io/projected/46ad7de9-e01d-414c-8a4d-9073ad986186-kube-api-access-5c9gv\") pod \"barbican-cb5c-account-create-update-n52qj\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.951907 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8ee9-account-create-update-c7rsn"] Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.954279 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.969053 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-public-tls-certs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.969111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-combined-ca-bundle\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.969131 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-internal-tls-certs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.969192 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.969281 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98fff7bd-db4c-462f-8f2c-34733f4e81ad-logs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.969330 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data-custom\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc 
kubenswrapper[4931]: I0130 05:29:41.969368 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9tkc\" (UniqueName: \"kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.979580 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.982718 4931 projected.go:194] Error preparing data for projected volume kube-api-access-t9tkc for pod openstack/barbican-api-cbdc6b6c8-m9v7c: failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.982797 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:42.482778751 +0000 UTC m=+1317.852688998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t9tkc" (UniqueName: "kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:41 crc kubenswrapper[4931]: I0130 05:29:41.983303 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8ee9-account-create-update-c7rsn"] Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.984804 4931 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.984845 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:42.484832129 +0000 UTC m=+1317.854742386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : secret "barbican-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.984878 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:41 crc kubenswrapper[4931]: E0130 05:29:41.985035 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data podName:081e3873-ea99-4486-925f-784a98e49405 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:42.484890541 +0000 UTC m=+1317.854935392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data") pod "rabbitmq-cell1-server-0" (UID: "081e3873-ea99-4486-925f-784a98e49405") : configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:41.992665 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-internal-tls-certs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.014902 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98fff7bd-db4c-462f-8f2c-34733f4e81ad-logs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.015601 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.020496 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8ee9-account-create-update-sdn4j"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.026387 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-combined-ca-bundle\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.028390 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-public-tls-certs\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.035476 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kbkmb"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.037303 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data-custom\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.071552 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4flg\" (UniqueName: \"kubernetes.io/projected/5d4d7097-4e75-41cb-b451-6feb8e2184b9-kube-api-access-q4flg\") pod \"cinder-8ee9-account-create-update-c7rsn\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.071627 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4d7097-4e75-41cb-b451-6feb8e2184b9-operator-scripts\") pod \"cinder-8ee9-account-create-update-c7rsn\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " 
pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.072521 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8ee9-account-create-update-sdn4j"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.096089 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kbkmb"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.141084 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.141331 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="ovn-northd" containerID="cri-o://cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949" gracePeriod=30 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.141789 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="openstack-network-exporter" containerID="cri-o://dea51d6ee685a2470eaa0864347990ea744cedc85d00846568c67d56ba221ee1" gracePeriod=30 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.156096 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-df05-account-create-update-xmzpk"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.173085 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4flg\" (UniqueName: \"kubernetes.io/projected/5d4d7097-4e75-41cb-b451-6feb8e2184b9-kube-api-access-q4flg\") pod \"cinder-8ee9-account-create-update-c7rsn\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.173411 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4d7097-4e75-41cb-b451-6feb8e2184b9-operator-scripts\") pod \"cinder-8ee9-account-create-update-c7rsn\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.174204 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4d7097-4e75-41cb-b451-6feb8e2184b9-operator-scripts\") pod \"cinder-8ee9-account-create-update-c7rsn\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.174481 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-df05-account-create-update-nrbm4"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.175535 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.189929 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.190304 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-df05-account-create-update-xmzpk"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.210567 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-df05-account-create-update-nrbm4"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.234290 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4flg\" (UniqueName: \"kubernetes.io/projected/5d4d7097-4e75-41cb-b451-6feb8e2184b9-kube-api-access-q4flg\") pod \"cinder-8ee9-account-create-update-c7rsn\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.258026 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0120-account-create-update-cj262"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.259141 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.275053 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-ntbbc"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.276449 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.282654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef60747-e73b-451c-b8e1-6abd596d31bb-operator-scripts\") pod \"glance-df05-account-create-update-nrbm4\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.282901 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjv6q\" (UniqueName: \"kubernetes.io/projected/7ef60747-e73b-451c-b8e1-6abd596d31bb-kube-api-access-xjv6q\") pod \"glance-df05-account-create-update-nrbm4\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.285155 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.290221 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.308471 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0120-account-create-update-cj262"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.334663 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-ntbbc"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.337869 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.384338 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjv6q\" (UniqueName: \"kubernetes.io/projected/7ef60747-e73b-451c-b8e1-6abd596d31bb-kube-api-access-xjv6q\") pod \"glance-df05-account-create-update-nrbm4\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.384402 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6vk\" (UniqueName: \"kubernetes.io/projected/f493e630-c604-4fd1-9fa6-f26d6d1a179a-kube-api-access-zc6vk\") pod \"nova-cell0-10f6-account-create-update-ntbbc\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.398497 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f493e630-c604-4fd1-9fa6-f26d6d1a179a-operator-scripts\") pod \"nova-cell0-10f6-account-create-update-ntbbc\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.398607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6j5x\" (UniqueName: \"kubernetes.io/projected/d13136a7-4633-4386-822d-ceb2cb3320b8-kube-api-access-l6j5x\") pod \"nova-api-0120-account-create-update-cj262\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.398738 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef60747-e73b-451c-b8e1-6abd596d31bb-operator-scripts\") pod \"glance-df05-account-create-update-nrbm4\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.398826 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13136a7-4633-4386-822d-ceb2cb3320b8-operator-scripts\") pod \"nova-api-0120-account-create-update-cj262\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.399765 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef60747-e73b-451c-b8e1-6abd596d31bb-operator-scripts\") pod \"glance-df05-account-create-update-nrbm4\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.436074 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjv6q\" (UniqueName: \"kubernetes.io/projected/7ef60747-e73b-451c-b8e1-6abd596d31bb-kube-api-access-xjv6q\") pod \"glance-df05-account-create-update-nrbm4\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc 
kubenswrapper[4931]: I0130 05:29:42.443493 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-fkqxj"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.460922 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-fkqxj"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.523543 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9tkc\" (UniqueName: \"kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.523655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13136a7-4633-4386-822d-ceb2cb3320b8-operator-scripts\") pod \"nova-api-0120-account-create-update-cj262\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.523706 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.523843 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6vk\" (UniqueName: \"kubernetes.io/projected/f493e630-c604-4fd1-9fa6-f26d6d1a179a-kube-api-access-zc6vk\") pod \"nova-cell0-10f6-account-create-update-ntbbc\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.523863 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f493e630-c604-4fd1-9fa6-f26d6d1a179a-operator-scripts\") pod \"nova-cell0-10f6-account-create-update-ntbbc\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.523898 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6j5x\" (UniqueName: \"kubernetes.io/projected/d13136a7-4633-4386-822d-ceb2cb3320b8-kube-api-access-l6j5x\") pod \"nova-api-0120-account-create-update-cj262\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.525206 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.525302 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data podName:081e3873-ea99-4486-925f-784a98e49405 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:43.525270955 +0000 UTC m=+1318.895181212 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data") pod "rabbitmq-cell1-server-0" (UID: "081e3873-ea99-4486-925f-784a98e49405") : configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.526466 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13136a7-4633-4386-822d-ceb2cb3320b8-operator-scripts\") pod \"nova-api-0120-account-create-update-cj262\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.526546 4931 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.526580 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:43.526569392 +0000 UTC m=+1318.896479649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : secret "barbican-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.527685 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f493e630-c604-4fd1-9fa6-f26d6d1a179a-operator-scripts\") pod \"nova-cell0-10f6-account-create-update-ntbbc\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.536464 4931 generic.go:334] "Generic (PLEG): container finished" podID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerID="dea51d6ee685a2470eaa0864347990ea744cedc85d00846568c67d56ba221ee1" exitCode=2 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.536566 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91","Type":"ContainerDied","Data":"dea51d6ee685a2470eaa0864347990ea744cedc85d00846568c67d56ba221ee1"} Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.547321 4931 projected.go:194] Error preparing data for projected volume kube-api-access-t9tkc for pod openstack/barbican-api-cbdc6b6c8-m9v7c: failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.549913 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.557829 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:43.557764694 +0000 UTC m=+1318.927674951 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t9tkc" (UniqueName: "kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.563055 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-326d-account-create-update-b25zb"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.565978 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.574212 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.623589 4931 generic.go:334] "Generic (PLEG): container finished" podID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerID="c948d726013eb4e8273ef998118172023cae6536ca99db77a7f1ebd4884def12" exitCode=143 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.623975 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e","Type":"ContainerDied","Data":"c948d726013eb4e8273ef998118172023cae6536ca99db77a7f1ebd4884def12"} Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.689323 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6vk\" (UniqueName: \"kubernetes.io/projected/f493e630-c604-4fd1-9fa6-f26d6d1a179a-kube-api-access-zc6vk\") pod \"nova-cell0-10f6-account-create-update-ntbbc\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.695394 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6j5x\" (UniqueName: \"kubernetes.io/projected/d13136a7-4633-4386-822d-ceb2cb3320b8-kube-api-access-l6j5x\") pod \"nova-api-0120-account-create-update-cj262\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.699105 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.757945 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-b25zb"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.758615 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-operator-scripts\") pod \"nova-cell1-326d-account-create-update-b25zb\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.758739 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swhwl\" (UniqueName: \"kubernetes.io/projected/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-kube-api-access-swhwl\") pod \"nova-cell1-326d-account-create-update-b25zb\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.845404 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.846057 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="openstack-network-exporter" containerID="cri-o://4583c4efc80289000d7023b793d84dc55442d51907c690cc558e927738cb2e88" gracePeriod=300 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.861703 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-operator-scripts\") pod \"nova-cell1-326d-account-create-update-b25zb\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.861779 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swhwl\" (UniqueName: \"kubernetes.io/projected/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-kube-api-access-swhwl\") pod \"nova-cell1-326d-account-create-update-b25zb\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.861939 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: E0130 05:29:42.861985 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data podName:fc3f4796-66b1-452b-afca-5e62cbf2a53b nodeName:}" failed. No retries permitted until 2026-01-30 05:29:44.861970479 +0000 UTC m=+1320.231880736 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data") pod "rabbitmq-server-0" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b") : configmap "rabbitmq-config-data" not found Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.874902 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-operator-scripts\") pod \"nova-cell1-326d-account-create-update-b25zb\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.902156 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swhwl\" (UniqueName: \"kubernetes.io/projected/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-kube-api-access-swhwl\") pod \"nova-cell1-326d-account-create-update-b25zb\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.929767 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rpr97"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.940085 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0120-account-create-update-dptmf"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.940751 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.956508 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75d9f6f6ff-kmswn"] Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.956843 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75d9f6f6ff-kmswn" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-api" containerID="cri-o://59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22" gracePeriod=30 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.957248 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-75d9f6f6ff-kmswn" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-httpd" containerID="cri-o://e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e" gracePeriod=30 Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.975865 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:42 crc kubenswrapper[4931]: I0130 05:29:42.986312 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0120-account-create-update-dptmf"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.003677 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="ovsdbserver-sb" containerID="cri-o://8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c" gracePeriod=300 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.030340 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rpr97"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.047311 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.047689 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="openstack-network-exporter" containerID="cri-o://36f1f59d90f7e1367de837bd2375e2c11d0df21e5687e4d77b474faff3e8df0b" gracePeriod=300 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.073784 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-vfdzl"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.089467 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-vfdzl"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.124205 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-dvktv"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.124545 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-dvktv" podUID="4ba289fc-17e9-45e9-ac24-434d69045d97" containerName="openstack-network-exporter" containerID="cri-o://82c70d68aab65fc3db72ee184a048732b17b72a09f49232810d0c430a261f1e7" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.140706 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ggjtl"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.151410 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-thxc2"] Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.158980 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c is running failed: container process not found" containerID="8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.161013 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c is running failed: container process not found" containerID="8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.166889 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c is running failed: container process not found" containerID="8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.166962 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="ovsdbserver-sb" Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.199275 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28f211b_be26_4f15_92a1_36b91cb53bbb.slice/crio-4583c4efc80289000d7023b793d84dc55442d51907c690cc558e927738cb2e88.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28f211b_be26_4f15_92a1_36b91cb53bbb.slice/crio-conmon-8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c.scope\": RecentStats: unable to find data in memory cache]" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.219600 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wxb94"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.256813 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wxb94"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.281575 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cwv94"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.295980 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-rvcsw"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.317516 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cwv94"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.327235 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ldr24"] Jan 30 05:29:43 crc kubenswrapper[4931]: W0130 05:29:43.339129 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod623f3c8f_d741_4ba4_baca_905a13102f38.slice/crio-b0a95f43d76f1ec6c04259c654f8d4d57485ed8bd7f8f8efedcad5a660e7a5b0 WatchSource:0}: Error finding container b0a95f43d76f1ec6c04259c654f8d4d57485ed8bd7f8f8efedcad5a660e7a5b0: Status 404 returned error can't find the container with id b0a95f43d76f1ec6c04259c654f8d4d57485ed8bd7f8f8efedcad5a660e7a5b0 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.390080 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-rvcsw"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.418365 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ldr24"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.451518 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="ovsdbserver-nb" 
containerID="cri-o://ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21" gracePeriod=300 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.453844 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053ccacf-d473-49f5-89e5-545a753e5e03" path="/var/lib/kubelet/pods/053ccacf-d473-49f5-89e5-545a753e5e03/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.454799 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c65b18-0526-4eec-a608-20478c5eb008" path="/var/lib/kubelet/pods/08c65b18-0526-4eec-a608-20478c5eb008/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.455481 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff3c7ac-e403-4826-bf45-a6bed05570b7" path="/var/lib/kubelet/pods/1ff3c7ac-e403-4826-bf45-a6bed05570b7/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.456004 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719" path="/var/lib/kubelet/pods/2b999b60-d1ed-4c1a-8ca2-da7aa9dbb719/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.457024 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c29ace9-3be7-44a1-b8eb-d356a4721152" path="/var/lib/kubelet/pods/2c29ace9-3be7-44a1-b8eb-d356a4721152/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.457558 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b78d3f2-c575-4b24-bbb8-c956f61a575d" path="/var/lib/kubelet/pods/3b78d3f2-c575-4b24-bbb8-c956f61a575d/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.458110 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438fbbb5-a318-4714-9dac-e3f0fc3f63d3" path="/var/lib/kubelet/pods/438fbbb5-a318-4714-9dac-e3f0fc3f63d3/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.476560 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd6723b-baf8-47eb-a774-68a5dfbcc4a6" path="/var/lib/kubelet/pods/6dd6723b-baf8-47eb-a774-68a5dfbcc4a6/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.483217 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1" path="/var/lib/kubelet/pods/7d8ef5ef-69a5-48f4-87d7-dbce2a99f8a1/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.487746 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1b1f6c-2147-48f7-87ea-e64672036831" path="/var/lib/kubelet/pods/bf1b1f6c-2147-48f7-87ea-e64672036831/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.490040 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98e6af1-4571-4da7-a6e8-0b54505af47c" path="/var/lib/kubelet/pods/d98e6af1-4571-4da7-a6e8-0b54505af47c/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.493264 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd612f9b-4de8-48e4-a945-c97e5c495292" path="/var/lib/kubelet/pods/dd612f9b-4de8-48e4-a945-c97e5c495292/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.499910 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ddcee7-a757-43b5-bf76-552cbd8d9078" path="/var/lib/kubelet/pods/f3ddcee7-a757-43b5-bf76-552cbd8d9078/volumes" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.502094 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-ctzjd"] Jan 30 05:29:43 crc 
kubenswrapper[4931]: I0130 05:29:43.502948 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.523306 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerName="dnsmasq-dns" containerID="cri-o://e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e" gracePeriod=10 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.606073 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9tkc\" (UniqueName: \"kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.611621 4931 projected.go:194] Error preparing data for projected volume kube-api-access-t9tkc for pod openstack/barbican-api-cbdc6b6c8-m9v7c: failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.632409 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.634950 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.635021 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data podName:081e3873-ea99-4486-925f-784a98e49405 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:45.635002744 +0000 UTC m=+1321.004913001 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data") pod "rabbitmq-cell1-server-0" (UID: "081e3873-ea99-4486-925f-784a98e49405") : configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.636554 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637175 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-server" containerID="cri-o://e78b14aff0684ab7de691a18e86ce169b9e67b8f02342d8eba7927de4cb39ec6" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637579 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="swift-recon-cron" containerID="cri-o://7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637625 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="rsync" containerID="cri-o://fa1a93081b269f4ada317cd8046ebb7f1a7c1edf1f6e97c13ed393eacd7e1973" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637668 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-expirer" containerID="cri-o://cf71a5f4e5a8611b3edb8a350ba0e2eedbd78c0fb76770c94841152df4a3ab69" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637706 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-updater" containerID="cri-o://577bb47efe5f44d38e3c888fdf879028a229599eb11554a344d6a077afa58802" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637749 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-auditor" containerID="cri-o://2aa8176e0269c78ed82e92b582f8a0a44311ad87daa1079e948a15315c72207f" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637793 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-replicator" containerID="cri-o://cc9cceab2cf461936102038fbf1707f4be2e195decb5808193c7f51c4adb08d3" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637832 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-server" containerID="cri-o://b4e57fcd32132c1d41dc41783803f5dfdbd53a0317437ca189732c5c62a33471" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637872 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-updater" containerID="cri-o://6088dfc85b1e09a936dd16faccae994e80e2ccb29840c2c09302842b83328fc2" 
gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637915 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-auditor" containerID="cri-o://072ac216076cb0fec2ec21789975f4b6fdf297b846d9774f980cb280a52a2718" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637954 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-replicator" containerID="cri-o://840bb675eae49d372214aa49017516eb7cc03feb5f0cebb6fb56a2dd4d0837b9" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.637994 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-server" containerID="cri-o://01987d0b4f025a347544f55c09ea6cf4f3249f746d37222f35ee196eb4525b63" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.638052 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-reaper" containerID="cri-o://9eac75907fb7af02eb9159bedaf64c4ca7dee04ca441b549c3a48132b186515f" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.638092 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-auditor" containerID="cri-o://64945c3ef451f83b413801249e5cc8cedcb622a5a129ee80defec98e393eed29" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.638168 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-replicator" containerID="cri-o://de7939eb8c76f478b34a03e7dd08f4a308dc8bbb63a287db0a5b3eec8794cc7c" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.638286 4931 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.638328 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:45.638317297 +0000 UTC m=+1321.008227554 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : secret "barbican-config-data" not found Jan 30 05:29:43 crc kubenswrapper[4931]: E0130 05:29:43.642654 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:45.642620279 +0000 UTC m=+1321.012530536 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t9tkc" (UniqueName: "kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.684406 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-798b7dc5fb-xl2zq"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.684730 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-798b7dc5fb-xl2zq" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-log" containerID="cri-o://1d733edb3ceaca43f34355e23bbaaced9e55a731057ead7b89c96398337d6e11" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.684917 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-798b7dc5fb-xl2zq" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-api" containerID="cri-o://e8519c60ec437acc9c9b5934ab3951ad5ad349186eda26fc85c2bae9b3010203" gracePeriod=30 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.720844 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-bcdcb"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.769594 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="rabbitmq" containerID="cri-o://1c82fc5914a60be58942659c6c59b346ba961ba1c401d7f0c82d22447fc0b135" gracePeriod=604800 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.770513 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-bcdcb"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.779657 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xxt5" event={"ID":"623f3c8f-d741-4ba4-baca-905a13102f38","Type":"ContainerStarted","Data":"b0a95f43d76f1ec6c04259c654f8d4d57485ed8bd7f8f8efedcad5a660e7a5b0"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.791007 4931 generic.go:334] "Generic (PLEG): container finished" podID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerID="571155fa2c4a4cc11bc78f96b7b5b636cdda183726d54338d2bf0cc02d77f003" exitCode=0 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.791157 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd","Type":"ContainerDied","Data":"571155fa2c4a4cc11bc78f96b7b5b636cdda183726d54338d2bf0cc02d77f003"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.794448 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-557f-account-create-update-6vjq5"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.817884 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-557f-account-create-update-6vjq5"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.825139 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_49a63fb4-24bc-4834-b6e7-937688c5de09/ovsdbserver-nb/0.log" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.825199 4931 generic.go:334] "Generic (PLEG): container finished" podID="49a63fb4-24bc-4834-b6e7-937688c5de09" 
containerID="36f1f59d90f7e1367de837bd2375e2c11d0df21e5687e4d77b474faff3e8df0b" exitCode=2 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.825221 4931 generic.go:334] "Generic (PLEG): container finished" podID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerID="ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21" exitCode=143 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.825289 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"49a63fb4-24bc-4834-b6e7-937688c5de09","Type":"ContainerDied","Data":"36f1f59d90f7e1367de837bd2375e2c11d0df21e5687e4d77b474faff3e8df0b"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.825315 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"49a63fb4-24bc-4834-b6e7-937688c5de09","Type":"ContainerDied","Data":"ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.850793 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" event={"ID":"9ac55021-a07e-443f-9ee9-e7516556b975","Type":"ContainerStarted","Data":"ddc1c9e389f315057ec0a85201907373bcd7582adeb9a6f356d1b36e03264dc9"} Jan 30 05:29:43 crc kubenswrapper[4931]: W0130 05:29:43.858080 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46ad7de9_e01d_414c_8a4d_9073ad986186.slice/crio-d1886781d10ecb06348f9abb9caccf69a1bc5841a942465cb5e9e6aa23d451dd WatchSource:0}: Error finding container d1886781d10ecb06348f9abb9caccf69a1bc5841a942465cb5e9e6aa23d451dd: Status 404 returned error can't find the container with id d1886781d10ecb06348f9abb9caccf69a1bc5841a942465cb5e9e6aa23d451dd Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.895771 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dvktv_4ba289fc-17e9-45e9-ac24-434d69045d97/openstack-network-exporter/0.log" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.895823 4931 generic.go:334] "Generic (PLEG): container finished" podID="4ba289fc-17e9-45e9-ac24-434d69045d97" containerID="82c70d68aab65fc3db72ee184a048732b17b72a09f49232810d0c430a261f1e7" exitCode=2 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.895916 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dvktv" event={"ID":"4ba289fc-17e9-45e9-ac24-434d69045d97","Type":"ContainerDied","Data":"82c70d68aab65fc3db72ee184a048732b17b72a09f49232810d0c430a261f1e7"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.913052 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9b8l8"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.943109 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f28f211b-be26-4f15-92a1-36b91cb53bbb/ovsdbserver-sb/0.log" Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.943515 4931 generic.go:334] "Generic (PLEG): container finished" podID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerID="4583c4efc80289000d7023b793d84dc55442d51907c690cc558e927738cb2e88" exitCode=2 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.943629 4931 generic.go:334] "Generic (PLEG): container finished" podID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerID="8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c" exitCode=143 Jan 30 05:29:43 crc 
kubenswrapper[4931]: I0130 05:29:43.943851 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f28f211b-be26-4f15-92a1-36b91cb53bbb","Type":"ContainerDied","Data":"4583c4efc80289000d7023b793d84dc55442d51907c690cc558e927738cb2e88"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.944038 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f28f211b-be26-4f15-92a1-36b91cb53bbb","Type":"ContainerDied","Data":"8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.951558 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9b8l8"] Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.970892 4931 generic.go:334] "Generic (PLEG): container finished" podID="e1f9790c-c395-4c72-b569-3140f703b56f" containerID="e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e" exitCode=0 Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.971055 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d9f6f6ff-kmswn" event={"ID":"e1f9790c-c395-4c72-b569-3140f703b56f","Type":"ContainerDied","Data":"e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e"} Jan 30 05:29:43 crc kubenswrapper[4931]: I0130 05:29:43.987959 4931 generic.go:334] "Generic (PLEG): container finished" podID="6b263e8e-7618-4044-bed1-b35174d6a8f4" containerID="998f26954c016e9a4be4fed72f68f879a1f7793c171311545d8f4958871325fd" exitCode=137 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.046650 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tjkcd"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.051678 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" containerID="cri-o://52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.053922 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:44 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: if [ -n "barbican" ]; then Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="barbican" Jan 30 05:29:44 crc kubenswrapper[4931]: else Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:44 crc kubenswrapper[4931]: fi Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:44 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:44 crc kubenswrapper[4931]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:44 crc kubenswrapper[4931]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:44 crc kubenswrapper[4931]: # support updates Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.062688 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tjkcd"] Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.066079 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-cb5c-account-create-update-n52qj" podUID="46ad7de9-e01d-414c-8a4d-9073ad986186" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.070466 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-867d8cd54-77bnr"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.101696 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6xxt5"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.109501 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hqm5b"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.117390 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hqm5b"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.125474 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.125755 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-log" containerID="cri-o://754804bc268dc311547eddc996a035b132392b26798898d2ba034bc32dc1ee16" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.125962 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-httpd" containerID="cri-o://3509d69982e816f9732671b256bbf363b32c9a199362011499a8607bf3a6e808" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.131526 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.131733 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-log" containerID="cri-o://3795752efe01b170e153d47107ec186f09220104cdec5c71d0a210a63580f858" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.132106 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-httpd" containerID="cri-o://cc448c5e4a9d4def969b75156b3cc39bbccbe47f49a05ef9d15592b4643a809f" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.141231 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a921-account-create-update-mqpxv"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 
05:29:44.169357 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a921-account-create-update-mqpxv"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.179016 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dvktv_4ba289fc-17e9-45e9-ac24-434d69045d97/openstack-network-exporter/0.log" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.179071 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.185782 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8ee9-account-create-update-c7rsn"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.196308 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-wtjbg"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.211827 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f28f211b-be26-4f15-92a1-36b91cb53bbb/ovsdbserver-sb/0.log" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.211888 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.212321 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-wtjbg"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.248595 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9z9pd"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.268504 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9z9pd"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.271432 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.294628 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-df05-account-create-update-nrbm4"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.319895 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4c2nt"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.328132 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4c2nt"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.334562 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0120-account-create-update-cj262"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.340444 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-cb5c-account-create-update-n52qj"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.350716 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vbzqc"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.362632 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vbzqc"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.362711 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.371271 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373083 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-combined-ca-bundle\") pod \"6b263e8e-7618-4044-bed1-b35174d6a8f4\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373130 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-scripts\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373152 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-combined-ca-bundle\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373168 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdb-rundir\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373229 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-combined-ca-bundle\") pod \"4ba289fc-17e9-45e9-ac24-434d69045d97\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373243 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovn-rundir\") pod \"4ba289fc-17e9-45e9-ac24-434d69045d97\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373262 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k758\" (UniqueName: \"kubernetes.io/projected/6b263e8e-7618-4044-bed1-b35174d6a8f4-kube-api-access-5k758\") pod \"6b263e8e-7618-4044-bed1-b35174d6a8f4\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9mdl\" (UniqueName: \"kubernetes.io/projected/f28f211b-be26-4f15-92a1-36b91cb53bbb-kube-api-access-k9mdl\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373341 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-metrics-certs-tls-certs\") pod \"4ba289fc-17e9-45e9-ac24-434d69045d97\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.373381 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-config\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.379496 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba289fc-17e9-45e9-ac24-434d69045d97-config\") pod \"4ba289fc-17e9-45e9-ac24-434d69045d97\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.379553 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config-secret\") pod \"6b263e8e-7618-4044-bed1-b35174d6a8f4\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.379588 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config\") pod \"6b263e8e-7618-4044-bed1-b35174d6a8f4\" (UID: \"6b263e8e-7618-4044-bed1-b35174d6a8f4\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.379611 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6ntv\" (UniqueName: \"kubernetes.io/projected/4ba289fc-17e9-45e9-ac24-434d69045d97-kube-api-access-j6ntv\") pod \"4ba289fc-17e9-45e9-ac24-434d69045d97\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.379654 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovs-rundir\") pod \"4ba289fc-17e9-45e9-ac24-434d69045d97\" (UID: \"4ba289fc-17e9-45e9-ac24-434d69045d97\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.380376 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-xvdtt"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.380440 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xvdtt"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.381159 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba289fc-17e9-45e9-ac24-434d69045d97-config" (OuterVolumeSpecName: "config") pod "4ba289fc-17e9-45e9-ac24-434d69045d97" (UID: "4ba289fc-17e9-45e9-ac24-434d69045d97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.390496 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.390739 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-log" containerID="cri-o://e5dadd497214a5d2efc5b8027947f3661f7f73599b0778570358c42329955e8d" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.390964 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-config" (OuterVolumeSpecName: "config") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.391146 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-api" containerID="cri-o://d70e15b0f074e59f1a9f39048c2cf45a62e81400091cb70df139030d514fe003" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.392942 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-ntbbc"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.393518 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "4ba289fc-17e9-45e9-ac24-434d69045d97" (UID: "4ba289fc-17e9-45e9-ac24-434d69045d97"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.393627 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdbserver-sb-tls-certs\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.393670 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-metrics-certs-tls-certs\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.393811 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28f211b-be26-4f15-92a1-36b91cb53bbb-kube-api-access-k9mdl" (OuterVolumeSpecName: "kube-api-access-k9mdl") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "kube-api-access-k9mdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.393832 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f28f211b-be26-4f15-92a1-36b91cb53bbb\" (UID: \"f28f211b-be26-4f15-92a1-36b91cb53bbb\") " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.394760 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9mdl\" (UniqueName: \"kubernetes.io/projected/f28f211b-be26-4f15-92a1-36b91cb53bbb-kube-api-access-k9mdl\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.394833 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.394890 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ba289fc-17e9-45e9-ac24-434d69045d97-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.394945 4931 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.395233 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.401322 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "4ba289fc-17e9-45e9-ac24-434d69045d97" (UID: "4ba289fc-17e9-45e9-ac24-434d69045d97"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.401346 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.401541 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-scripts" (OuterVolumeSpecName: "scripts") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.402211 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-x4mqp"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.407938 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-x4mqp"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.413811 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-b25zb"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.414039 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b263e8e-7618-4044-bed1-b35174d6a8f4-kube-api-access-5k758" (OuterVolumeSpecName: "kube-api-access-5k758") pod "6b263e8e-7618-4044-bed1-b35174d6a8f4" (UID: "6b263e8e-7618-4044-bed1-b35174d6a8f4"). InnerVolumeSpecName "kube-api-access-5k758". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.420043 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.420959 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-log" containerID="cri-o://a268ff4ead170d4fc7c25a89e846ed2d0f10278b94da4082529cc4ebd9ab4f0e" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.421028 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-metadata" containerID="cri-o://6c90254ae67ae50ab19fa555ce55d1839d94322882a09bb91ad616b62efcfeeb" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.423733 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba289fc-17e9-45e9-ac24-434d69045d97-kube-api-access-j6ntv" (OuterVolumeSpecName: "kube-api-access-j6ntv") pod "4ba289fc-17e9-45e9-ac24-434d69045d97" (UID: "4ba289fc-17e9-45e9-ac24-434d69045d97"). InnerVolumeSpecName "kube-api-access-j6ntv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.425911 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7789bbd757-45b5w"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.434028 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d69b6c966-npv8t"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.434312 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d69b6c966-npv8t" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api-log" containerID="cri-o://0e0a199cc977b5213010336cc2b6c461a3916b61b6c3d9f6dc8eecc7d5c8d17e" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.434855 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d69b6c966-npv8t" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api" containerID="cri-o://44392d9ac535d9a3ce2ca47aa88e680823c3197a2e50d537aa67df4b03e52fd1" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.437645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.437986 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.445806 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.445962 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.481464 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-867d8cd54-77bnr"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.512076 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" containerID="cri-o://ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" gracePeriod=29 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.518187 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6ntv\" (UniqueName: \"kubernetes.io/projected/4ba289fc-17e9-45e9-ac24-434d69045d97-kube-api-access-j6ntv\") on node \"crc\" DevicePath \"\"" Jan 30 
05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.518235 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.518246 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f28f211b-be26-4f15-92a1-36b91cb53bbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.518261 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.518271 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4ba289fc-17e9-45e9-ac24-434d69045d97-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.518281 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k758\" (UniqueName: \"kubernetes.io/projected/6b263e8e-7618-4044-bed1-b35174d6a8f4-kube-api-access-5k758\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.530110 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.530370 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener-log" containerID="cri-o://f0f483180c30bc672edc2e00c840d52567eb9b0c61f8c285d3a7c2a185f38020" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.530871 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener" containerID="cri-o://1bd0c14353cbfd196f658cae7f7167624a1cc818a0ca23ec5151f1c871a22e65" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.553802 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7c996f77-c9rqm"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.554147 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7c996f77-c9rqm" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker-log" containerID="cri-o://4ec5d987a513f3f04cf30f8d242bd5ee734a2387c5a279b070c18b72f4a56519" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.556598 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7c996f77-c9rqm" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker" containerID="cri-o://2c58ff417f0ff91cddebc47f633febd5ac50ac3ddd97dcc2fce28574c94ac8a6" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.580957 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-cbdc6b6c8-m9v7c"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.586168 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config" 
(OuterVolumeSpecName: "openstack-config") pod "6b263e8e-7618-4044-bed1-b35174d6a8f4" (UID: "6b263e8e-7618-4044-bed1-b35174d6a8f4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.589525 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.589745 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2565fa42-f180-4948-8b2f-68c419d78d2b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.592072 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data kube-api-access-t9tkc], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/barbican-api-cbdc6b6c8-m9v7c" podUID="98fff7bd-db4c-462f-8f2c-34733f4e81ad" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.594548 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7789bbd757-45b5w"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.608331 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-cb5c-account-create-update-n52qj"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.614299 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.614903 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="9bb44c01-e79f-42d8-912c-66db07c6b328" containerName="nova-cell1-conductor-conductor" containerID="cri-o://9aac5eb9a7735bf1efc6d134170c4297691e32c19fa1a2cd01ab0ae918243436" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.622262 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.626936 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhn9j"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.631685 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jhn9j"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.639464 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s287f"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.646124 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.646362 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="88988b92-cd64-490d-b55f-959ecf4095af" containerName="nova-cell0-conductor-conductor" containerID="cri-o://83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.646680 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-s287f"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.667172 4931 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-ntbbc"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.677475 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8ee9-account-create-update-c7rsn"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.682457 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-df05-account-create-update-nrbm4"] Jan 30 05:29:44 crc kubenswrapper[4931]: W0130 05:29:44.686215 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ef60747_e73b_451c_b8e1_6abd596d31bb.slice/crio-5f23441d1b937de628ce230831d40e43097271e249db0186baccaa1a1137dc00 WatchSource:0}: Error finding container 5f23441d1b937de628ce230831d40e43097271e249db0186baccaa1a1137dc00: Status 404 returned error can't find the container with id 5f23441d1b937de628ce230831d40e43097271e249db0186baccaa1a1137dc00 Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.716467 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:44 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: if [ -n "cinder" ]; then Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="cinder" Jan 30 05:29:44 crc kubenswrapper[4931]: else Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:44 crc kubenswrapper[4931]: fi Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:44 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:44 crc kubenswrapper[4931]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:44 crc kubenswrapper[4931]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:44 crc kubenswrapper[4931]: # support updates Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.716959 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:44 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: if [ -n "nova_cell0" ]; then Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="nova_cell0" Jan 30 05:29:44 crc kubenswrapper[4931]: else Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:44 crc kubenswrapper[4931]: fi Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:44 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:44 crc kubenswrapper[4931]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:44 crc kubenswrapper[4931]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:44 crc kubenswrapper[4931]: # support updates Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.717279 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:44 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: if [ -n "glance" ]; then Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="glance" Jan 30 05:29:44 crc kubenswrapper[4931]: else Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:44 crc kubenswrapper[4931]: fi Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:44 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:44 crc kubenswrapper[4931]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:44 crc kubenswrapper[4931]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:44 crc kubenswrapper[4931]: # support updates Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.718586 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-df05-account-create-update-nrbm4" podUID="7ef60747-e73b-451c-b8e1-6abd596d31bb" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.718653 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-8ee9-account-create-update-c7rsn" podUID="5d4d7097-4e75-41cb-b451-6feb8e2184b9" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.718690 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" podUID="f493e630-c604-4fd1-9fa6-f26d6d1a179a" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.743363 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-b25zb"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.779455 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.782146 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.782322 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" containerName="nova-scheduler-scheduler" containerID="cri-o://cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.824051 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21 is running failed: container process not found" containerID="ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.824533 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21 is running failed: container process not found" containerID="ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.824685 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21 is running failed: container process not found" containerID="ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.824704 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="ovsdbserver-nb" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.832855 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-76fb878d5c-s22sw"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.833511 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.833104 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-76fb878d5c-s22sw" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-httpd" containerID="cri-o://3baca7478354f0fb4066c265761c1fd4465b993347daea084eeb2d40cd40bed6" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.833713 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-76fb878d5c-s22sw" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-server" containerID="cri-o://02a426537f79889d684c812318ea1dd0bd0af03a098fb1e7d47cd94e43353e1c" gracePeriod=30 Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.851250 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0120-account-create-update-cj262"] Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.916707 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="rabbitmq" containerID="cri-o://1bb7e19530d33f5a4cf134ca5c6644743c868cc750cac0c1bb313f0f47240dd8" gracePeriod=604800 Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.917442 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:44 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: if [ -n "nova_cell1" ]; then Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="nova_cell1" Jan 30 05:29:44 crc kubenswrapper[4931]: else Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:44 crc kubenswrapper[4931]: fi Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc 
kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:44 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:44 crc kubenswrapper[4931]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:44 crc kubenswrapper[4931]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:44 crc kubenswrapper[4931]: # support updates Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.918194 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:44 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: if [ -n "nova_api" ]; then Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="nova_api" Jan 30 05:29:44 crc kubenswrapper[4931]: else Jan 30 05:29:44 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:44 crc kubenswrapper[4931]: fi Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:44 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:44 crc kubenswrapper[4931]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:44 crc kubenswrapper[4931]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:44 crc kubenswrapper[4931]: # support updates Jan 30 05:29:44 crc kubenswrapper[4931]: Jan 30 05:29:44 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.919032 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-326d-account-create-update-b25zb" podUID="2b6b4ccf-805f-463c-b8c9-d975fd2a9059" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.922990 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-0120-account-create-update-cj262" podUID="d13136a7-4633-4386-822d-ceb2cb3320b8" Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.934802 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 30 05:29:44 crc kubenswrapper[4931]: E0130 05:29:44.934883 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data podName:fc3f4796-66b1-452b-afca-5e62cbf2a53b nodeName:}" failed. 
No retries permitted until 2026-01-30 05:29:48.934869998 +0000 UTC m=+1324.304780255 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data") pod "rabbitmq-server-0" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b") : configmap "rabbitmq-config-data" not found Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.983574 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ba289fc-17e9-45e9-ac24-434d69045d97" (UID: "4ba289fc-17e9-45e9-ac24-434d69045d97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:44 crc kubenswrapper[4931]: I0130 05:29:44.985637 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.006571 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b263e8e-7618-4044-bed1-b35174d6a8f4" (UID: "6b263e8e-7618-4044-bed1-b35174d6a8f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.008320 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dvktv_4ba289fc-17e9-45e9-ac24-434d69045d97/openstack-network-exporter/0.log" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.008389 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dvktv" event={"ID":"4ba289fc-17e9-45e9-ac24-434d69045d97","Type":"ContainerDied","Data":"39a86ec198f21c9ed97c5b274927fc46f2f6f56ea606ee080f8268afe4d2241b"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.008473 4931 scope.go:117] "RemoveContainer" containerID="82c70d68aab65fc3db72ee184a048732b17b72a09f49232810d0c430a261f1e7" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.008594 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dvktv" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.025886 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6b263e8e-7618-4044-bed1-b35174d6a8f4" (UID: "6b263e8e-7618-4044-bed1-b35174d6a8f4"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.032789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ee9-account-create-update-c7rsn" event={"ID":"5d4d7097-4e75-41cb-b451-6feb8e2184b9","Type":"ContainerStarted","Data":"f0d507bce832298463e6a094cc7b0f7eb6c19d37e2a1f9f33913556dc5ffc1c1"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.034067 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerName="galera" containerID="cri-o://1c39e215a63df32503630d9061f8096755ed069b7c32eec93d18140193ca977b" gracePeriod=30 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.040714 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "4ba289fc-17e9-45e9-ac24-434d69045d97" (UID: "4ba289fc-17e9-45e9-ac24-434d69045d97"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.056991 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.057022 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.057032 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.057040 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ba289fc-17e9-45e9-ac24-434d69045d97-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.057054 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b263e8e-7618-4044-bed1-b35174d6a8f4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.057522 4931 generic.go:334] "Generic (PLEG): container finished" podID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerID="1d733edb3ceaca43f34355e23bbaaced9e55a731057ead7b89c96398337d6e11" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.057621 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798b7dc5fb-xl2zq" event={"ID":"ebe4f743-9a60-428f-8b58-14ba160d9fd7","Type":"ContainerDied","Data":"1d733edb3ceaca43f34355e23bbaaced9e55a731057ead7b89c96398337d6e11"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.064481 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). 
InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.081412 4931 generic.go:334] "Generic (PLEG): container finished" podID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.081482 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerDied","Data":"ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.091487 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" event={"ID":"f493e630-c604-4fd1-9fa6-f26d6d1a179a","Type":"ContainerStarted","Data":"34afa4a36598164bccdcad1293ddead8d7610abc3c9551b334f25c08a708b5f9"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.103341 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f28f211b-be26-4f15-92a1-36b91cb53bbb" (UID: "f28f211b-be26-4f15-92a1-36b91cb53bbb"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.158906 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.158935 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f28f211b-be26-4f15-92a1-36b91cb53bbb-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166834 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="fa1a93081b269f4ada317cd8046ebb7f1a7c1edf1f6e97c13ed393eacd7e1973" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166870 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="cf71a5f4e5a8611b3edb8a350ba0e2eedbd78c0fb76770c94841152df4a3ab69" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166879 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="577bb47efe5f44d38e3c888fdf879028a229599eb11554a344d6a077afa58802" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166888 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="2aa8176e0269c78ed82e92b582f8a0a44311ad87daa1079e948a15315c72207f" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166895 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="cc9cceab2cf461936102038fbf1707f4be2e195decb5808193c7f51c4adb08d3" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166903 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="b4e57fcd32132c1d41dc41783803f5dfdbd53a0317437ca189732c5c62a33471" 
exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166909 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="6088dfc85b1e09a936dd16faccae994e80e2ccb29840c2c09302842b83328fc2" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166918 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="072ac216076cb0fec2ec21789975f4b6fdf297b846d9774f980cb280a52a2718" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166924 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="840bb675eae49d372214aa49017516eb7cc03feb5f0cebb6fb56a2dd4d0837b9" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166930 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="01987d0b4f025a347544f55c09ea6cf4f3249f746d37222f35ee196eb4525b63" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166937 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="9eac75907fb7af02eb9159bedaf64c4ca7dee04ca441b549c3a48132b186515f" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166942 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="64945c3ef451f83b413801249e5cc8cedcb622a5a129ee80defec98e393eed29" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166948 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="de7939eb8c76f478b34a03e7dd08f4a308dc8bbb63a287db0a5b3eec8794cc7c" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166954 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="e78b14aff0684ab7de691a18e86ce169b9e67b8f02342d8eba7927de4cb39ec6" exitCode=0 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.166994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"fa1a93081b269f4ada317cd8046ebb7f1a7c1edf1f6e97c13ed393eacd7e1973"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167019 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"cf71a5f4e5a8611b3edb8a350ba0e2eedbd78c0fb76770c94841152df4a3ab69"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167029 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"577bb47efe5f44d38e3c888fdf879028a229599eb11554a344d6a077afa58802"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167037 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"2aa8176e0269c78ed82e92b582f8a0a44311ad87daa1079e948a15315c72207f"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167045 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"cc9cceab2cf461936102038fbf1707f4be2e195decb5808193c7f51c4adb08d3"} 
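
The entries above repeat one failure pattern: each mariadb-account-create-update pod hits CreateContainerConfigError because its per-service DB secret (glance-db-secret, cinder-db-secret, nova-cell0-db-secret, nova-cell1-db-secret, nova-api-db-secret, barbican-db-secret) is not found, and the config-data mounts for rabbitmq-server-0 and barbican-api-cbdc6b6c8-m9v7c fail the same way, each requeued with a 4 s durationBeforeRetry. A minimal triage sketch, assuming journalctl access on the node and an authenticated oc client; the openstack namespace comes from the pod names in the log, while the script name, time window, and output format are illustrative:

#!/bin/bash
# triage-missing-secrets.sh (hypothetical helper, not part of this log):
# pull the *-db-secret names that kubelet reported as missing, then ask the
# API server whether each secret exists now in the openstack namespace.
NS=openstack
SINCE="2026-01-30 05:29:44"   # window taken from the entries above
UNTIL="2026-01-30 05:29:46"

journalctl -u kubelet -o cat --since "$SINCE" --until "$UNTIL" \
  | grep 'CreateContainerConfigError' \
  | grep -oE '[a-z0-9-]+-db-secret' \
  | sort -u \
  | while read -r secret; do
      if oc -n "$NS" get secret "$secret" >/dev/null 2>&1; then
        echo "present again: $secret"
      else
        echo "still missing: $secret"
      fi
    done

Given that the same window is full of SyncLoop DELETE events and volume teardowns across the openstack namespace, the missing secrets are consistent with an in-progress teardown rather than a broken deployment, so the 4 s retries would be expected to stop once the owning pods are removed.
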
Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167056 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"b4e57fcd32132c1d41dc41783803f5dfdbd53a0317437ca189732c5c62a33471"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167065 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"6088dfc85b1e09a936dd16faccae994e80e2ccb29840c2c09302842b83328fc2"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167073 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"072ac216076cb0fec2ec21789975f4b6fdf297b846d9774f980cb280a52a2718"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167081 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"840bb675eae49d372214aa49017516eb7cc03feb5f0cebb6fb56a2dd4d0837b9"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167089 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"01987d0b4f025a347544f55c09ea6cf4f3249f746d37222f35ee196eb4525b63"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167098 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"9eac75907fb7af02eb9159bedaf64c4ca7dee04ca441b549c3a48132b186515f"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167105 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"64945c3ef451f83b413801249e5cc8cedcb622a5a129ee80defec98e393eed29"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167113 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"de7939eb8c76f478b34a03e7dd08f4a308dc8bbb63a287db0a5b3eec8794cc7c"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.167121 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"e78b14aff0684ab7de691a18e86ce169b9e67b8f02342d8eba7927de4cb39ec6"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.168746 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_49a63fb4-24bc-4834-b6e7-937688c5de09/ovsdbserver-nb/0.log" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.168793 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"49a63fb4-24bc-4834-b6e7-937688c5de09","Type":"ContainerDied","Data":"d92e6599441617be3a228c318ab5084a192dbb4da1df24b47362bc9f2366da37"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.168810 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92e6599441617be3a228c318ab5084a192dbb4da1df24b47362bc9f2366da37" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.169525 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-df05-account-create-update-nrbm4" event={"ID":"7ef60747-e73b-451c-b8e1-6abd596d31bb","Type":"ContainerStarted","Data":"5f23441d1b937de628ce230831d40e43097271e249db0186baccaa1a1137dc00"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.178892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7789bbd757-45b5w" event={"ID":"c0c7aeee-9023-433a-83d0-aa0e9942a0ed","Type":"ContainerStarted","Data":"eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.178935 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7789bbd757-45b5w" event={"ID":"c0c7aeee-9023-433a-83d0-aa0e9942a0ed","Type":"ContainerStarted","Data":"b30436eda9ab254987a1049643d0e45f01f12d10b3f44f43863aa93c4c7ce86b"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.194207 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0120-account-create-update-cj262" event={"ID":"d13136a7-4633-4386-822d-ceb2cb3320b8","Type":"ContainerStarted","Data":"94259a3980ca06eabd57f602644b7974c5802e08901f053b5b514caf5639d01b"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.214029 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cb5c-account-create-update-n52qj" event={"ID":"46ad7de9-e01d-414c-8a4d-9073ad986186","Type":"ContainerStarted","Data":"d1886781d10ecb06348f9abb9caccf69a1bc5841a942465cb5e9e6aa23d451dd"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.242595 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-326d-account-create-update-b25zb" event={"ID":"2b6b4ccf-805f-463c-b8c9-d975fd2a9059","Type":"ContainerStarted","Data":"c0d032c4bd8a6102c282961d37b4968dfb10aaf972fe8b42cd15f5070c0f0f3a"} Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.256008 4931 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 05:29:45 crc kubenswrapper[4931]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 30 05:29:45 crc kubenswrapper[4931]: Jan 30 05:29:45 crc kubenswrapper[4931]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 30 05:29:45 crc kubenswrapper[4931]: Jan 30 05:29:45 crc kubenswrapper[4931]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 30 05:29:45 crc kubenswrapper[4931]: Jan 30 05:29:45 crc kubenswrapper[4931]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 30 05:29:45 crc kubenswrapper[4931]: Jan 30 05:29:45 crc kubenswrapper[4931]: if [ -n "barbican" ]; then Jan 30 05:29:45 crc kubenswrapper[4931]: GRANT_DATABASE="barbican" Jan 30 05:29:45 crc kubenswrapper[4931]: else Jan 30 05:29:45 crc kubenswrapper[4931]: GRANT_DATABASE="*" Jan 30 05:29:45 crc kubenswrapper[4931]: fi Jan 30 05:29:45 crc kubenswrapper[4931]: Jan 30 05:29:45 crc kubenswrapper[4931]: # going for maximum compatibility here: Jan 30 05:29:45 crc kubenswrapper[4931]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 30 05:29:45 crc kubenswrapper[4931]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 30 05:29:45 crc kubenswrapper[4931]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 30 05:29:45 crc kubenswrapper[4931]: # support updates Jan 30 05:29:45 crc kubenswrapper[4931]: Jan 30 05:29:45 crc kubenswrapper[4931]: $MYSQL_CMD < logger="UnhandledError" Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.276731 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-cb5c-account-create-update-n52qj" podUID="46ad7de9-e01d-414c-8a4d-9073ad986186" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.361977 4931 generic.go:334] "Generic (PLEG): container finished" podID="7c0ddaec-4521-4898-8649-262b52f24acb" containerID="754804bc268dc311547eddc996a035b132392b26798898d2ba034bc32dc1ee16" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.362290 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c0ddaec-4521-4898-8649-262b52f24acb","Type":"ContainerDied","Data":"754804bc268dc311547eddc996a035b132392b26798898d2ba034bc32dc1ee16"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.375759 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.397754 4931 generic.go:334] "Generic (PLEG): container finished" podID="3415cfc4-a71a-4110-bf82-295181bb386f" containerID="3795752efe01b170e153d47107ec186f09220104cdec5c71d0a210a63580f858" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.397836 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3415cfc4-a71a-4110-bf82-295181bb386f","Type":"ContainerDied","Data":"3795752efe01b170e153d47107ec186f09220104cdec5c71d0a210a63580f858"} Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.401981 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_49a63fb4-24bc-4834-b6e7-937688c5de09/ovsdbserver-nb/0.log" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.402039 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.407403 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.424842 4931 scope.go:117] "RemoveContainer" containerID="998f26954c016e9a4be4fed72f68f879a1f7793c171311545d8f4958871325fd" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.484750 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-nb\") pod \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.484831 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-swift-storage-0\") pod \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.484863 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-config\") pod \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.484890 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-scripts\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.484942 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.484974 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-config\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485001 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-sb\") pod \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485048 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdbserver-nb-tls-certs\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485073 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdb-rundir\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485156 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrnld\" 
(UniqueName: \"kubernetes.io/projected/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-kube-api-access-nrnld\") pod \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485198 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f2l8\" (UniqueName: \"kubernetes.io/projected/49a63fb4-24bc-4834-b6e7-937688c5de09-kube-api-access-6f2l8\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485239 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-svc\") pod \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\" (UID: \"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485279 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-metrics-certs-tls-certs\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.485333 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-combined-ca-bundle\") pod \"49a63fb4-24bc-4834-b6e7-937688c5de09\" (UID: \"49a63fb4-24bc-4834-b6e7-937688c5de09\") " Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.505819 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f28f211b-be26-4f15-92a1-36b91cb53bbb/ovsdbserver-sb/0.log" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.506044 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.506761 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-scripts" (OuterVolumeSpecName: "scripts") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.540449 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.561165 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-kube-api-access-nrnld" (OuterVolumeSpecName: "kube-api-access-nrnld") pod "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" (UID: "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d"). InnerVolumeSpecName "kube-api-access-nrnld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.569279 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-config" (OuterVolumeSpecName: "config") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.588640 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.588671 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.588683 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrnld\" (UniqueName: \"kubernetes.io/projected/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-kube-api-access-nrnld\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.588692 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/49a63fb4-24bc-4834-b6e7-937688c5de09-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.615205 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b263e8e-7618-4044-bed1-b35174d6a8f4" path="/var/lib/kubelet/pods/6b263e8e-7618-4044-bed1-b35174d6a8f4/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.617065 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bdb7d70-31a9-4d52-aae0-072e8c62a23f" path="/var/lib/kubelet/pods/6bdb7d70-31a9-4d52-aae0-072e8c62a23f/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.618404 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee75b9c-df74-490e-94ff-21eacce0b65a" path="/var/lib/kubelet/pods/6ee75b9c-df74-490e-94ff-21eacce0b65a/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.619019 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9262fbc3-2503-4252-b2dd-10cd8dcfbd6f" path="/var/lib/kubelet/pods/9262fbc3-2503-4252-b2dd-10cd8dcfbd6f/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.620380 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9ebe73-0201-4486-9de9-e8828e84de53" path="/var/lib/kubelet/pods/9b9ebe73-0201-4486-9de9-e8828e84de53/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.620997 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a8f8fe-306a-4373-bbb0-d96f2b498d62" path="/var/lib/kubelet/pods/b0a8f8fe-306a-4373-bbb0-d96f2b498d62/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.621861 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b14699-8089-4af7-b0bd-654a8fda9715" path="/var/lib/kubelet/pods/c3b14699-8089-4af7-b0bd-654a8fda9715/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.621888 4931 generic.go:334] "Generic (PLEG): container finished" podID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerID="e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e" exitCode=0 Jan 30 05:29:45 crc 
kubenswrapper[4931]: I0130 05:29:45.622118 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.622974 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae14e96-e869-491f-bbab-32bccf87cc10" path="/var/lib/kubelet/pods/cae14e96-e869-491f-bbab-32bccf87cc10/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.623545 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a63fb4-24bc-4834-b6e7-937688c5de09-kube-api-access-6f2l8" (OuterVolumeSpecName: "kube-api-access-6f2l8") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "kube-api-access-6f2l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.623604 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cbb37a-882a-46cf-9cee-0543ac708004" path="/var/lib/kubelet/pods/d5cbb37a-882a-46cf-9cee-0543ac708004/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.623752 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.624857 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da1ef5f2-7d57-4f89-9b48-9c603b322e5e" path="/var/lib/kubelet/pods/da1ef5f2-7d57-4f89-9b48-9c603b322e5e/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.640128 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6b82f5-5c39-4101-b9f8-05aaf9547a0b" path="/var/lib/kubelet/pods/df6b82f5-5c39-4101-b9f8-05aaf9547a0b/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.642053 4931 generic.go:334] "Generic (PLEG): container finished" podID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerID="a268ff4ead170d4fc7c25a89e846ed2d0f10278b94da4082529cc4ebd9ab4f0e" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.667071 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65373ae-84e0-4338-be4c-8cc8bd2d3fb0" path="/var/lib/kubelet/pods/e65373ae-84e0-4338-be4c-8cc8bd2d3fb0/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.668382 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8624816-8c2c-4d9c-b3a5-426253850926" path="/var/lib/kubelet/pods/e8624816-8c2c-4d9c-b3a5-426253850926/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.669023 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac7a7da-7577-4269-8e37-fd964be6f75c" path="/var/lib/kubelet/pods/fac7a7da-7577-4269-8e37-fd964be6f75c/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.669642 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b" path="/var/lib/kubelet/pods/ff26ff0b-dc1f-4605-bf81-1a5dcde91d1b/volumes" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.689581 4931 generic.go:334] "Generic (PLEG): container finished" podID="58928fea-709c-44d8-bd12-23937da8e2c4" 
containerID="0e0a199cc977b5213010336cc2b6c461a3916b61b6c3d9f6dc8eecc7d5c8d17e" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.691676 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9tkc\" (UniqueName: \"kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.691961 4931 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.691882 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data\") pod \"barbican-api-cbdc6b6c8-m9v7c\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.692024 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:49.692009514 +0000 UTC m=+1325.061919771 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : secret "barbican-config-data" not found Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.692327 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.692365 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data podName:081e3873-ea99-4486-925f-784a98e49405 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:49.692352543 +0000 UTC m=+1325.062262800 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data") pod "rabbitmq-cell1-server-0" (UID: "081e3873-ea99-4486-925f-784a98e49405") : configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.692472 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f2l8\" (UniqueName: \"kubernetes.io/projected/49a63fb4-24bc-4834-b6e7-937688c5de09-kube-api-access-6f2l8\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.692496 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.694302 4931 projected.go:194] Error preparing data for projected volume kube-api-access-t9tkc for pod openstack/barbican-api-cbdc6b6c8-m9v7c: failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:45 crc kubenswrapper[4931]: E0130 05:29:45.694343 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc podName:98fff7bd-db4c-462f-8f2c-34733f4e81ad nodeName:}" failed. No retries permitted until 2026-01-30 05:29:49.694333949 +0000 UTC m=+1325.064244206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-t9tkc" (UniqueName: "kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc") pod "barbican-api-cbdc6b6c8-m9v7c" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad") : failed to fetch token: serviceaccounts "barbican-barbican" not found Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.710320 4931 generic.go:334] "Generic (PLEG): container finished" podID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerID="4ec5d987a513f3f04cf30f8d242bd5ee734a2387c5a279b070c18b72f4a56519" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.725434 4931 generic.go:334] "Generic (PLEG): container finished" podID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerID="e5dadd497214a5d2efc5b8027947f3661f7f73599b0778570358c42329955e8d" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.728923 4931 generic.go:334] "Generic (PLEG): container finished" podID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerID="f0f483180c30bc672edc2e00c840d52567eb9b0c61f8c285d3a7c2a185f38020" exitCode=143 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.746547 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.185:8776/healthcheck\": dial tcp 10.217.0.185:8776: connect: connection refused" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.783761 4931 generic.go:334] "Generic (PLEG): container finished" podID="623f3c8f-d741-4ba4-baca-905a13102f38" containerID="3a989cbeb7f4bf86d12831f4d3313ed2a342cf72d86cb0362de32fea4fb7324a" exitCode=1 Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.784285 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.785585 4931 scope.go:117] "RemoveContainer" containerID="3a989cbeb7f4bf86d12831f4d3313ed2a342cf72d86cb0362de32fea4fb7324a" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.906108 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" (UID: "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.909486 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 30 05:29:45 crc kubenswrapper[4931]: I0130 05:29:45.996187 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-config" (OuterVolumeSpecName: "config") pod "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" (UID: "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.004192 4931 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.004219 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.004228 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.016569 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.018472 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" (UID: "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.028097 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" (UID: "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.106414 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.106649 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.106661 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.110770 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.111094 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" (UID: "1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.146509 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "49a63fb4-24bc-4834-b6e7-937688c5de09" (UID: "49a63fb4-24bc-4834-b6e7-937688c5de09"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.207928 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.207954 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/49a63fb4-24bc-4834-b6e7-937688c5de09-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.207963 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:46 crc kubenswrapper[4931]: E0130 05:29:46.338405 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06 is running failed: container process not found" containerID="83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 05:29:46 crc kubenswrapper[4931]: E0130 05:29:46.338620 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06 is running failed: container process not found" containerID="83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 05:29:46 crc kubenswrapper[4931]: E0130 05:29:46.338796 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06 is running failed: container process not found" containerID="83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 05:29:46 crc kubenswrapper[4931]: E0130 05:29:46.338817 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="88988b92-cd64-490d-b55f-959ecf4095af" containerName="nova-cell0-conductor-conductor" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346237 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f28f211b-be26-4f15-92a1-36b91cb53bbb","Type":"ContainerDied","Data":"920213aded2a6124fc2a4c0ef0f31260bf1b62f8e7693371989b52df16882f74"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346376 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" event={"ID":"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d","Type":"ContainerDied","Data":"e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346391 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-ctzjd" 
event={"ID":"1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d","Type":"ContainerDied","Data":"645723e127490c600cf593cc161f0207c0a197195fa54096da51b7634ddd33ac"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346402 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74","Type":"ContainerDied","Data":"a268ff4ead170d4fc7c25a89e846ed2d0f10278b94da4082529cc4ebd9ab4f0e"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346415 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d69b6c966-npv8t" event={"ID":"58928fea-709c-44d8-bd12-23937da8e2c4","Type":"ContainerDied","Data":"0e0a199cc977b5213010336cc2b6c461a3916b61b6c3d9f6dc8eecc7d5c8d17e"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346462 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c996f77-c9rqm" event={"ID":"7729e2d8-6c8c-4759-9e5d-535ad1586f47","Type":"ContainerDied","Data":"4ec5d987a513f3f04cf30f8d242bd5ee734a2387c5a279b070c18b72f4a56519"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346476 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"406c25f3-c398-4ace-ba4b-1d9b48b289a2","Type":"ContainerDied","Data":"e5dadd497214a5d2efc5b8027947f3661f7f73599b0778570358c42329955e8d"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346493 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" event={"ID":"728a2e60-915e-4447-9465-aa64f7f5c7bb","Type":"ContainerDied","Data":"f0f483180c30bc672edc2e00c840d52567eb9b0c61f8c285d3a7c2a185f38020"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346563 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" event={"ID":"9ac55021-a07e-443f-9ee9-e7516556b975","Type":"ContainerStarted","Data":"1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346581 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xxt5" event={"ID":"623f3c8f-d741-4ba4-baca-905a13102f38","Type":"ContainerDied","Data":"3a989cbeb7f4bf86d12831f4d3313ed2a342cf72d86cb0362de32fea4fb7324a"} Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.346631 4931 scope.go:117] "RemoveContainer" containerID="4583c4efc80289000d7023b793d84dc55442d51907c690cc558e927738cb2e88" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.380007 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.414980 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98fff7bd-db4c-462f-8f2c-34733f4e81ad-logs\") pod \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.415030 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-internal-tls-certs\") pod \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.415085 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-public-tls-certs\") pod \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.415222 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data-custom\") pod \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.415292 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-combined-ca-bundle\") pod \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\" (UID: \"98fff7bd-db4c-462f-8f2c-34733f4e81ad\") " Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.417036 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fff7bd-db4c-462f-8f2c-34733f4e81ad-logs" (OuterVolumeSpecName: "logs") pod "98fff7bd-db4c-462f-8f2c-34733f4e81ad" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.423312 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "98fff7bd-db4c-462f-8f2c-34733f4e81ad" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.424400 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "98fff7bd-db4c-462f-8f2c-34733f4e81ad" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.432124 4931 scope.go:117] "RemoveContainer" containerID="8f3ed5c70b0a0c2e85aa57cc8cb01dca94391f61dba6acbcb746da8a8225d47c" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.435792 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98fff7bd-db4c-462f-8f2c-34733f4e81ad" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:46 crc kubenswrapper[4931]: I0130 05:29:46.445016 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "98fff7bd-db4c-462f-8f2c-34733f4e81ad" (UID: "98fff7bd-db4c-462f-8f2c-34733f4e81ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.481636 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.487152 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.492124 4931 scope.go:117] "RemoveContainer" containerID="e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.513569 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516237 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4flg\" (UniqueName: \"kubernetes.io/projected/5d4d7097-4e75-41cb-b451-6feb8e2184b9-kube-api-access-q4flg\") pod \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516505 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4d7097-4e75-41cb-b451-6feb8e2184b9-operator-scripts\") pod \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\" (UID: \"5d4d7097-4e75-41cb-b451-6feb8e2184b9\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516920 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516933 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516941 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98fff7bd-db4c-462f-8f2c-34733f4e81ad-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516951 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.516962 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.519378 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4d7097-4e75-41cb-b451-6feb8e2184b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d4d7097-4e75-41cb-b451-6feb8e2184b9" (UID: "5d4d7097-4e75-41cb-b451-6feb8e2184b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.530537 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4d7097-4e75-41cb-b451-6feb8e2184b9-kube-api-access-q4flg" (OuterVolumeSpecName: "kube-api-access-q4flg") pod "5d4d7097-4e75-41cb-b451-6feb8e2184b9" (UID: "5d4d7097-4e75-41cb-b451-6feb8e2184b9"). InnerVolumeSpecName "kube-api-access-q4flg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.533586 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-dvktv"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.542091 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-dvktv"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.549027 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-ctzjd"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.554763 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-ctzjd"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.621543 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4d7097-4e75-41cb-b451-6feb8e2184b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.621570 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4flg\" (UniqueName: \"kubernetes.io/projected/5d4d7097-4e75-41cb-b451-6feb8e2184b9-kube-api-access-q4flg\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.735169 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.753158 4931 scope.go:117] "RemoveContainer" containerID="d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.769883 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.781582 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.813783 4931 generic.go:334] "Generic (PLEG): container finished" podID="98d21216-5a0f-422c-9642-0ea353a33e82" containerID="02a426537f79889d684c812318ea1dd0bd0af03a098fb1e7d47cd94e43353e1c" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.813815 4931 generic.go:334] "Generic (PLEG): container finished" podID="98d21216-5a0f-422c-9642-0ea353a33e82" containerID="3baca7478354f0fb4066c265761c1fd4465b993347daea084eeb2d40cd40bed6" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.813880 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76fb878d5c-s22sw" event={"ID":"98d21216-5a0f-422c-9642-0ea353a33e82","Type":"ContainerDied","Data":"02a426537f79889d684c812318ea1dd0bd0af03a098fb1e7d47cd94e43353e1c"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.813940 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76fb878d5c-s22sw" event={"ID":"98d21216-5a0f-422c-9642-0ea353a33e82","Type":"ContainerDied","Data":"3baca7478354f0fb4066c265761c1fd4465b993347daea084eeb2d40cd40bed6"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.813959 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-76fb878d5c-s22sw" event={"ID":"98d21216-5a0f-422c-9642-0ea353a33e82","Type":"ContainerDied","Data":"68d0e2dfe8dc67ba7ff79544ecf0a950e34ec34379d61e5a1edf698fb315e6f7"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.813972 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d0e2dfe8dc67ba7ff79544ecf0a950e34ec34379d61e5a1edf698fb315e6f7" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.822589 4931 generic.go:334] "Generic (PLEG): container finished" podID="88988b92-cd64-490d-b55f-959ecf4095af" containerID="83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.822648 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"88988b92-cd64-490d-b55f-959ecf4095af","Type":"ContainerDied","Data":"83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.822678 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"88988b92-cd64-490d-b55f-959ecf4095af","Type":"ContainerDied","Data":"56a8c3403b77c67382071da65bd384ea85d43f4776ebb7971c9a14fd4e392984"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.822689 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56a8c3403b77c67382071da65bd384ea85d43f4776ebb7971c9a14fd4e392984" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830161 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc6vk\" (UniqueName: \"kubernetes.io/projected/f493e630-c604-4fd1-9fa6-f26d6d1a179a-kube-api-access-zc6vk\") pod \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830292 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvzsw\" (UniqueName: \"kubernetes.io/projected/2565fa42-f180-4948-8b2f-68c419d78d2b-kube-api-access-qvzsw\") pod \"2565fa42-f180-4948-8b2f-68c419d78d2b\" (UID: 
\"2565fa42-f180-4948-8b2f-68c419d78d2b\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830333 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhwl\" (UniqueName: \"kubernetes.io/projected/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-kube-api-access-swhwl\") pod \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830349 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-combined-ca-bundle\") pod \"2565fa42-f180-4948-8b2f-68c419d78d2b\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-vencrypt-tls-certs\") pod \"2565fa42-f180-4948-8b2f-68c419d78d2b\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830434 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-nova-novncproxy-tls-certs\") pod \"2565fa42-f180-4948-8b2f-68c419d78d2b\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830462 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-operator-scripts\") pod \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\" (UID: \"2b6b4ccf-805f-463c-b8c9-d975fd2a9059\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830488 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f493e630-c604-4fd1-9fa6-f26d6d1a179a-operator-scripts\") pod \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\" (UID: \"f493e630-c604-4fd1-9fa6-f26d6d1a179a\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.830557 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-config-data\") pod \"2565fa42-f180-4948-8b2f-68c419d78d2b\" (UID: \"2565fa42-f180-4948-8b2f-68c419d78d2b\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.841078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b6b4ccf-805f-463c-b8c9-d975fd2a9059" (UID: "2b6b4ccf-805f-463c-b8c9-d975fd2a9059"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.841257 4931 generic.go:334] "Generic (PLEG): container finished" podID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerID="9398f7e713fb447b3a151d286be2d2910e4d8535fd421e906c46b8cc2c9a4728" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.841328 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd","Type":"ContainerDied","Data":"9398f7e713fb447b3a151d286be2d2910e4d8535fd421e906c46b8cc2c9a4728"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.841369 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd","Type":"ContainerDied","Data":"300c4ac1a78a0898043a5bb9c0ea1e976d3646b2689510ee5ed5d0a93470d249"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.841381 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300c4ac1a78a0898043a5bb9c0ea1e976d3646b2689510ee5ed5d0a93470d249" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.844625 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2565fa42-f180-4948-8b2f-68c419d78d2b-kube-api-access-qvzsw" (OuterVolumeSpecName: "kube-api-access-qvzsw") pod "2565fa42-f180-4948-8b2f-68c419d78d2b" (UID: "2565fa42-f180-4948-8b2f-68c419d78d2b"). InnerVolumeSpecName "kube-api-access-qvzsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.845713 4931 scope.go:117] "RemoveContainer" containerID="e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.846815 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f493e630-c604-4fd1-9fa6-f26d6d1a179a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f493e630-c604-4fd1-9fa6-f26d6d1a179a" (UID: "f493e630-c604-4fd1-9fa6-f26d6d1a179a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:46.853082 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e\": container with ID starting with e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e not found: ID does not exist" containerID="e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.853113 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e"} err="failed to get container status \"e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e\": rpc error: code = NotFound desc = could not find container \"e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e\": container with ID starting with e47aded0e7f33012a293c39a0b4c3359af98c56ec7e3320fc16509dd1760490e not found: ID does not exist" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.853134 4931 scope.go:117] "RemoveContainer" containerID="d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:46.855176 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2\": container with ID starting with d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2 not found: ID does not exist" containerID="d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.855226 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2"} err="failed to get container status \"d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2\": rpc error: code = NotFound desc = could not find container \"d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2\": container with ID starting with d6f739809f3c40dd44a526d132e36537732145204e408c6bc109beb8752471c2 not found: ID does not exist" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.855282 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f493e630-c604-4fd1-9fa6-f26d6d1a179a-kube-api-access-zc6vk" (OuterVolumeSpecName: "kube-api-access-zc6vk") pod "f493e630-c604-4fd1-9fa6-f26d6d1a179a" (UID: "f493e630-c604-4fd1-9fa6-f26d6d1a179a"). InnerVolumeSpecName "kube-api-access-zc6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.859974 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-kube-api-access-swhwl" (OuterVolumeSpecName: "kube-api-access-swhwl") pod "2b6b4ccf-805f-463c-b8c9-d975fd2a9059" (UID: "2b6b4ccf-805f-463c-b8c9-d975fd2a9059"). InnerVolumeSpecName "kube-api-access-swhwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.871306 4931 generic.go:334] "Generic (PLEG): container finished" podID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerID="2823dcc09d156bc746ffbc3ab196c3d6e136f453bf377837d7cce10861a168f4" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.871355 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e","Type":"ContainerDied","Data":"2823dcc09d156bc746ffbc3ab196c3d6e136f453bf377837d7cce10861a168f4"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.871379 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e","Type":"ContainerDied","Data":"d58c6ad814535983bdd3740a7cd3d8c344b8ddf68658a3e7d51e045ec46e07d7"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.871389 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58c6ad814535983bdd3740a7cd3d8c344b8ddf68658a3e7d51e045ec46e07d7" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.871492 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.872718 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8ee9-account-create-update-c7rsn" event={"ID":"5d4d7097-4e75-41cb-b451-6feb8e2184b9","Type":"ContainerDied","Data":"f0d507bce832298463e6a094cc7b0f7eb6c19d37e2a1f9f33913556dc5ffc1c1"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.872757 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8ee9-account-create-update-c7rsn" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.883176 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0120-account-create-update-cj262" event={"ID":"d13136a7-4633-4386-822d-ceb2cb3320b8","Type":"ContainerDied","Data":"94259a3980ca06eabd57f602644b7974c5802e08901f053b5b514caf5639d01b"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.883262 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0120-account-create-update-cj262" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.886972 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" event={"ID":"f493e630-c604-4fd1-9fa6-f26d6d1a179a","Type":"ContainerDied","Data":"34afa4a36598164bccdcad1293ddead8d7610abc3c9551b334f25c08a708b5f9"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.887066 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-10f6-account-create-update-ntbbc" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.902499 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xxt5" event={"ID":"623f3c8f-d741-4ba4-baca-905a13102f38","Type":"ContainerStarted","Data":"7543927ad63a3b9e73b08c0c9fa7a83b683adacd56d1d27f7d72109cd07d12dd"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.903780 4931 scope.go:117] "RemoveContainer" containerID="7543927ad63a3b9e73b08c0c9fa7a83b683adacd56d1d27f7d72109cd07d12dd" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:46.908140 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-6xxt5_openstack(623f3c8f-d741-4ba4-baca-905a13102f38)\"" pod="openstack/root-account-create-update-6xxt5" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.922003 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-326d-account-create-update-b25zb" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.922022 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-326d-account-create-update-b25zb" event={"ID":"2b6b4ccf-805f-463c-b8c9-d975fd2a9059","Type":"ContainerDied","Data":"c0d032c4bd8a6102c282961d37b4968dfb10aaf972fe8b42cd15f5070c0f0f3a"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.931733 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13136a7-4633-4386-822d-ceb2cb3320b8-operator-scripts\") pod \"d13136a7-4633-4386-822d-ceb2cb3320b8\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.931922 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6j5x\" (UniqueName: \"kubernetes.io/projected/d13136a7-4633-4386-822d-ceb2cb3320b8-kube-api-access-l6j5x\") pod \"d13136a7-4633-4386-822d-ceb2cb3320b8\" (UID: \"d13136a7-4633-4386-822d-ceb2cb3320b8\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932116 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13136a7-4633-4386-822d-ceb2cb3320b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d13136a7-4633-4386-822d-ceb2cb3320b8" (UID: "d13136a7-4633-4386-822d-ceb2cb3320b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
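The pod_workers.go:1301 record above shows root-account-create-update-6xxt5 hitting the first rung of CrashLoopBackOff: its mariadb-account-create-update container started (the ContainerStarted event just above), exited, and the next start is now held back 10s. A sketch of the delay ladder, assuming the upstream kubelet policy of doubling per failed restart with a 5m cap; the 10s step is in the log, the doubling and the cap are assumed defaults.

BASE, CAP = 10, 300   # seconds; 10s from the log, 300s cap assumed

def crashloop_delays(restarts: int) -> list[int]:
    """Back-off applied before each of the first `restarts` restarts."""
    return [min(BASE * 2 ** i, CAP) for i in range(restarts)]

print(crashloop_delays(7))   # [10, 20, 40, 80, 160, 300, 300]

If the container keeps exiting non-zero, as its earlier exitCode=1 suggests it might, each failed restart climbs one rung of this ladder until the cap.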
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932397 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvzsw\" (UniqueName: \"kubernetes.io/projected/2565fa42-f180-4948-8b2f-68c419d78d2b-kube-api-access-qvzsw\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932409 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swhwl\" (UniqueName: \"kubernetes.io/projected/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-kube-api-access-swhwl\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932429 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6b4ccf-805f-463c-b8c9-d975fd2a9059-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932438 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f493e630-c604-4fd1-9fa6-f26d6d1a179a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932447 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13136a7-4633-4386-822d-ceb2cb3320b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.932456 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc6vk\" (UniqueName: \"kubernetes.io/projected/f493e630-c604-4fd1-9fa6-f26d6d1a179a-kube-api-access-zc6vk\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.943447 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13136a7-4633-4386-822d-ceb2cb3320b8-kube-api-access-l6j5x" (OuterVolumeSpecName: "kube-api-access-l6j5x") pod "d13136a7-4633-4386-822d-ceb2cb3320b8" (UID: "d13136a7-4633-4386-822d-ceb2cb3320b8"). InnerVolumeSpecName "kube-api-access-l6j5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.956402 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-config-data" (OuterVolumeSpecName: "config-data") pod "2565fa42-f180-4948-8b2f-68c419d78d2b" (UID: "2565fa42-f180-4948-8b2f-68c419d78d2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.958259 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" event={"ID":"9ac55021-a07e-443f-9ee9-e7516556b975","Type":"ContainerStarted","Data":"9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.959771 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener-log" containerID="cri-o://1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.960302 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener" containerID="cri-o://9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.983667 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "2565fa42-f180-4948-8b2f-68c419d78d2b" (UID: "2565fa42-f180-4948-8b2f-68c419d78d2b"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.993802 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-df05-account-create-update-nrbm4" event={"ID":"7ef60747-e73b-451c-b8e1-6abd596d31bb","Type":"ContainerDied","Data":"5f23441d1b937de628ce230831d40e43097271e249db0186baccaa1a1137dc00"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:46.993857 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f23441d1b937de628ce230831d40e43097271e249db0186baccaa1a1137dc00" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.010948 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2565fa42-f180-4948-8b2f-68c419d78d2b" (UID: "2565fa42-f180-4948-8b2f-68c419d78d2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.017514 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.017959 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7789bbd757-45b5w" event={"ID":"c0c7aeee-9023-433a-83d0-aa0e9942a0ed","Type":"ContainerStarted","Data":"a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.018153 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7789bbd757-45b5w" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker-log" containerID="cri-o://eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.018219 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7789bbd757-45b5w" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker" containerID="cri-o://a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.035019 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.036450 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef60747-e73b-451c-b8e1-6abd596d31bb-operator-scripts\") pod \"7ef60747-e73b-451c-b8e1-6abd596d31bb\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.036645 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjv6q\" (UniqueName: \"kubernetes.io/projected/7ef60747-e73b-451c-b8e1-6abd596d31bb-kube-api-access-xjv6q\") pod \"7ef60747-e73b-451c-b8e1-6abd596d31bb\" (UID: \"7ef60747-e73b-451c-b8e1-6abd596d31bb\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.041481 4931 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.048569 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.048586 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6j5x\" (UniqueName: \"kubernetes.io/projected/d13136a7-4633-4386-822d-ceb2cb3320b8-kube-api-access-l6j5x\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.048598 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.044220 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" podStartSLOduration=6.044202739 podStartE2EDuration="6.044202739s" podCreationTimestamp="2026-01-30 05:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-30 05:29:46.984036348 +0000 UTC m=+1322.353946605" watchObservedRunningTime="2026-01-30 05:29:47.044202739 +0000 UTC m=+1322.414112996" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.042952 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef","Type":"ContainerDied","Data":"1c39e215a63df32503630d9061f8096755ed069b7c32eec93d18140193ca977b"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.042930 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerID="1c39e215a63df32503630d9061f8096755ed069b7c32eec93d18140193ca977b" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.043563 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef60747-e73b-451c-b8e1-6abd596d31bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ef60747-e73b-451c-b8e1-6abd596d31bb" (UID: "7ef60747-e73b-451c-b8e1-6abd596d31bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.046640 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef60747-e73b-451c-b8e1-6abd596d31bb-kube-api-access-xjv6q" (OuterVolumeSpecName: "kube-api-access-xjv6q") pod "7ef60747-e73b-451c-b8e1-6abd596d31bb" (UID: "7ef60747-e73b-451c-b8e1-6abd596d31bb"). InnerVolumeSpecName "kube-api-access-xjv6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.048747 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef","Type":"ContainerDied","Data":"651858dcd740868b54f1818387952f7e3dd92b06537502abf826f277b0f1c2f7"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.048767 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="651858dcd740868b54f1818387952f7e3dd92b06537502abf826f277b0f1c2f7" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.046771 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.076214 4931 generic.go:334] "Generic (PLEG): container finished" podID="9bb44c01-e79f-42d8-912c-66db07c6b328" containerID="9aac5eb9a7735bf1efc6d134170c4297691e32c19fa1a2cd01ab0ae918243436" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.076291 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9bb44c01-e79f-42d8-912c-66db07c6b328","Type":"ContainerDied","Data":"9aac5eb9a7735bf1efc6d134170c4297691e32c19fa1a2cd01ab0ae918243436"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.076410 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.081515 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "2565fa42-f180-4948-8b2f-68c419d78d2b" (UID: "2565fa42-f180-4948-8b2f-68c419d78d2b"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.098904 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.104288 4931 generic.go:334] "Generic (PLEG): container finished" podID="2565fa42-f180-4948-8b2f-68c419d78d2b" containerID="4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe" exitCode=0 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.104362 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-cbdc6b6c8-m9v7c" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.105748 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2565fa42-f180-4948-8b2f-68c419d78d2b","Type":"ContainerDied","Data":"4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.105781 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2565fa42-f180-4948-8b2f-68c419d78d2b","Type":"ContainerDied","Data":"182ca03d45434848993e7087501801fbb8335a526ad9960a6da96e395124bc68"} Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.105797 4931 scope.go:117] "RemoveContainer" containerID="4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.105880 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.106544 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.114876 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-ntbbc"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.130029 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.134590 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-10f6-account-create-update-ntbbc"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.138113 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.149962 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rplvd\" (UniqueName: \"kubernetes.io/projected/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-kube-api-access-rplvd\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150009 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150030 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-combined-ca-bundle\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150068 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-scripts\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150085 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-public-tls-certs\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150109 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data-custom\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150132 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-internal-tls-certs\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150150 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data-custom\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150200 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-config-data\") pod \"88988b92-cd64-490d-b55f-959ecf4095af\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150231 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-galera-tls-certs\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: 
\"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.150251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-combined-ca-bundle\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.189846 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-etc-machine-id\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.189919 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-run-httpd\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.189941 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hln5j\" (UniqueName: \"kubernetes.io/projected/88988b92-cd64-490d-b55f-959ecf4095af-kube-api-access-hln5j\") pod \"88988b92-cd64-490d-b55f-959ecf4095af\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190005 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-config-data\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190032 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-combined-ca-bundle\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190073 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-combined-ca-bundle\") pod \"88988b92-cd64-490d-b55f-959ecf4095af\" (UID: \"88988b92-cd64-490d-b55f-959ecf4095af\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190107 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kolla-config\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190149 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-public-tls-certs\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190230 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsxq8\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-kube-api-access-dsxq8\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" 
(UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190281 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-etc-machine-id\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190322 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-operator-scripts\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190343 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190379 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-logs\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190402 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190441 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-log-httpd\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190494 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190614 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbw9z\" (UniqueName: \"kubernetes.io/projected/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kube-api-access-fbw9z\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190645 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zlj\" (UniqueName: \"kubernetes.io/projected/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-kube-api-access-64zlj\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 
05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190666 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-generated\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190685 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-internal-tls-certs\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190710 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-combined-ca-bundle\") pod \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\" (UID: \"c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.190726 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-default\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.191682 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef60747-e73b-451c-b8e1-6abd596d31bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.191700 4931 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2565fa42-f180-4948-8b2f-68c419d78d2b-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.191709 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjv6q\" (UniqueName: \"kubernetes.io/projected/7ef60747-e73b-451c-b8e1-6abd596d31bb-kube-api-access-xjv6q\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.193671 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.194056 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.195051 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.217797 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.220081 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-scripts" (OuterVolumeSpecName: "scripts") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.222852 4931 scope.go:117] "RemoveContainer" containerID="4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.222966 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88988b92-cd64-490d-b55f-959ecf4095af-kube-api-access-hln5j" (OuterVolumeSpecName: "kube-api-access-hln5j") pod "88988b92-cd64-490d-b55f-959ecf4095af" (UID: "88988b92-cd64-490d-b55f-959ecf4095af"). InnerVolumeSpecName "kube-api-access-hln5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.229497 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.229529 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-logs" (OuterVolumeSpecName: "logs") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.233694 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.235131 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.236146 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kube-api-access-fbw9z" (OuterVolumeSpecName: "kube-api-access-fbw9z") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "kube-api-access-fbw9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.237932 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.239453 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-kube-api-access-64zlj" (OuterVolumeSpecName: "kube-api-access-64zlj") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "kube-api-access-64zlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.239817 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.240112 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.240619 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe\": container with ID starting with 4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe not found: ID does not exist" containerID="4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.240655 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe"} err="failed to get container status \"4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe\": rpc error: code = NotFound desc = could not find container \"4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe\": container with ID starting with 4d787c63b2588decfd0b1e48116ec781fe2f24d2cd2bea253561aac466911bbe not found: ID does not exist" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.245211 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-kube-api-access-rplvd" (OuterVolumeSpecName: "kube-api-access-rplvd") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "kube-api-access-rplvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.258620 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-config-data" (OuterVolumeSpecName: "config-data") pod "88988b92-cd64-490d-b55f-959ecf4095af" (UID: "88988b92-cd64-490d-b55f-959ecf4095af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.264548 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-kube-api-access-dsxq8" (OuterVolumeSpecName: "kube-api-access-dsxq8") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "kube-api-access-dsxq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.267818 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-b25zb"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.300948 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.300966 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-326d-account-create-update-b25zb"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.301522 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts" (OuterVolumeSpecName: "scripts") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.322007 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329027 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67hmc\" (UniqueName: \"kubernetes.io/projected/9bb44c01-e79f-42d8-912c-66db07c6b328-kube-api-access-67hmc\") pod \"9bb44c01-e79f-42d8-912c-66db07c6b328\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329067 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\" (UID: \"7e9a7f86-7e9d-4062-9c50-72d0d82e24ef\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329116 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-config-data\") pod \"9bb44c01-e79f-42d8-912c-66db07c6b328\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " Jan 30 05:29:47 crc kubenswrapper[4931]: W0130 05:29:47.329164 4931 mount_helper_common.go:34] Warning: mount cleanup skipped because path does not exist: /var/lib/kubelet/pods/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef/volumes/kubernetes.io~local-volume/local-storage02-crc Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329198 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329295 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift\") pod \"98d21216-5a0f-422c-9642-0ea353a33e82\" (UID: \"98d21216-5a0f-422c-9642-0ea353a33e82\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329315 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-combined-ca-bundle\") pod \"9bb44c01-e79f-42d8-912c-66db07c6b328\" (UID: \"9bb44c01-e79f-42d8-912c-66db07c6b328\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329374 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts\") pod \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\" (UID: \"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd\") " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329933 4931 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329945 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsxq8\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-kube-api-access-dsxq8\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329957 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329965 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329973 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329980 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329989 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbw9z\" (UniqueName: \"kubernetes.io/projected/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-kube-api-access-fbw9z\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.329997 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330005 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zlj\" (UniqueName: \"kubernetes.io/projected/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-kube-api-access-64zlj\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 
crc kubenswrapper[4931]: I0130 05:29:47.330013 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330022 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rplvd\" (UniqueName: \"kubernetes.io/projected/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-kube-api-access-rplvd\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330041 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330051 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330060 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330069 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330077 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330085 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330093 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98d21216-5a0f-422c-9642-0ea353a33e82-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.330101 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hln5j\" (UniqueName: \"kubernetes.io/projected/88988b92-cd64-490d-b55f-959ecf4095af-kube-api-access-hln5j\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: W0130 05:29:47.331790 4931 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/98d21216-5a0f-422c-9642-0ea353a33e82/volumes/kubernetes.io~projected/etc-swift Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.331847 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: W0130 05:29:47.332163 4931 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd/volumes/kubernetes.io~secret/scripts Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.332181 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts" (OuterVolumeSpecName: "scripts") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.348649 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8ee9-account-create-update-c7rsn"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.358192 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8ee9-account-create-update-c7rsn"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.360231 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb44c01-e79f-42d8-912c-66db07c6b328-kube-api-access-67hmc" (OuterVolumeSpecName: "kube-api-access-67hmc") pod "9bb44c01-e79f-42d8-912c-66db07c6b328" (UID: "9bb44c01-e79f-42d8-912c-66db07c6b328"). InnerVolumeSpecName "kube-api-access-67hmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.366014 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7789bbd757-45b5w" podStartSLOduration=6.360559058 podStartE2EDuration="6.360559058s" podCreationTimestamp="2026-01-30 05:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 05:29:47.234096151 +0000 UTC m=+1322.604006408" watchObservedRunningTime="2026-01-30 05:29:47.360559058 +0000 UTC m=+1322.730469315" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.372672 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.407813 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.413641 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.432653 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.432925 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67hmc\" (UniqueName: \"kubernetes.io/projected/9bb44c01-e79f-42d8-912c-66db07c6b328-kube-api-access-67hmc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.432960 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.432972 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.432982 4931 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98d21216-5a0f-422c-9642-0ea353a33e82-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.463621 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-config-data" (OuterVolumeSpecName: "config-data") pod "9bb44c01-e79f-42d8-912c-66db07c6b328" (UID: "9bb44c01-e79f-42d8-912c-66db07c6b328"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.482508 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" path="/var/lib/kubelet/pods/1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d/volumes" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.483435 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6b4ccf-805f-463c-b8c9-d975fd2a9059" path="/var/lib/kubelet/pods/2b6b4ccf-805f-463c-b8c9-d975fd2a9059/volumes" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.483811 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba289fc-17e9-45e9-ac24-434d69045d97" path="/var/lib/kubelet/pods/4ba289fc-17e9-45e9-ac24-434d69045d97/volumes" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.484457 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4d7097-4e75-41cb-b451-6feb8e2184b9" path="/var/lib/kubelet/pods/5d4d7097-4e75-41cb-b451-6feb8e2184b9/volumes" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.484917 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" path="/var/lib/kubelet/pods/f28f211b-be26-4f15-92a1-36b91cb53bbb/volumes" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.485910 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f493e630-c604-4fd1-9fa6-f26d6d1a179a" path="/var/lib/kubelet/pods/f493e630-c604-4fd1-9fa6-f26d6d1a179a/volumes" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.521563 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-config-data" (OuterVolumeSpecName: "config-data") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.523972 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.526210 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.535955 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.535983 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.536016 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.536025 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.585781 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.590227 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.595826 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data" (OuterVolumeSpecName: "config-data") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.637714 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.637740 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.637750 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.678227 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88988b92-cd64-490d-b55f-959ecf4095af" (UID: "88988b92-cd64-490d-b55f-959ecf4095af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.697027 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.731891 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" (UID: "7e9a7f86-7e9d-4062-9c50-72d0d82e24ef"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.750227 4931 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.750260 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88988b92-cd64-490d-b55f-959ecf4095af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.750268 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.832576 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bb44c01-e79f-42d8-912c-66db07c6b328" (UID: "9bb44c01-e79f-42d8-912c-66db07c6b328"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.837689 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" (UID: "c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.849405 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98d21216-5a0f-422c-9642-0ea353a33e82" (UID: "98d21216-5a0f-422c-9642-0ea353a33e82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.852163 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.852237 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d21216-5a0f-422c-9642-0ea353a33e82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.852287 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb44c01-e79f-42d8-912c-66db07c6b328-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865779 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865820 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865833 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865847 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-cbdc6b6c8-m9v7c"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865856 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-cbdc6b6c8-m9v7c"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865890 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0120-account-create-update-cj262"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865902 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0120-account-create-update-cj262"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865914 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865929 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865941 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-595b-account-create-update-hcchn"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865954 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-595b-account-create-update-hcchn"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.865970 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-595b-account-create-update-jk6fx"] Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866314 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88988b92-cd64-490d-b55f-959ecf4095af" containerName="nova-cell0-conductor-conductor" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866324 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="88988b92-cd64-490d-b55f-959ecf4095af" containerName="nova-cell0-conductor-conductor" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866338 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api-log" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866344 4931 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api-log" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866359 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba289fc-17e9-45e9-ac24-434d69045d97" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866365 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba289fc-17e9-45e9-ac24-434d69045d97" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866376 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866383 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866396 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866402 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866432 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerName="init" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866439 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerName="init" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866448 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2565fa42-f180-4948-8b2f-68c419d78d2b" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866455 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2565fa42-f180-4948-8b2f-68c419d78d2b" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866465 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866472 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866495 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="ovsdbserver-nb" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866502 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="ovsdbserver-nb" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866512 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerName="dnsmasq-dns" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866519 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerName="dnsmasq-dns" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866531 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" 
containerName="mysql-bootstrap" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866538 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerName="mysql-bootstrap" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866549 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-httpd" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866555 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-httpd" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866565 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-server" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866571 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-server" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866579 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb44c01-e79f-42d8-912c-66db07c6b328" containerName="nova-cell1-conductor-conductor" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866585 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb44c01-e79f-42d8-912c-66db07c6b328" containerName="nova-cell1-conductor-conductor" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866595 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="probe" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866601 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="probe" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866611 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="cinder-scheduler" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866618 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="cinder-scheduler" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866631 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="ovsdbserver-sb" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866638 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="ovsdbserver-sb" Jan 30 05:29:47 crc kubenswrapper[4931]: E0130 05:29:47.866653 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerName="galera" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866659 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerName="galera" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866810 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-server" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866820 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" containerName="proxy-httpd" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866829 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2565fa42-f180-4948-8b2f-68c419d78d2b" 
containerName="nova-cell1-novncproxy-novncproxy" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866841 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="cinder-scheduler" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866848 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="88988b92-cd64-490d-b55f-959ecf4095af" containerName="nova-cell0-conductor-conductor" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866857 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866864 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866870 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c29d9d9-1679-4a4f-82da-9a2b0ae32a6d" containerName="dnsmasq-dns" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866876 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba289fc-17e9-45e9-ac24-434d69045d97" containerName="openstack-network-exporter" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866888 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" containerName="ovsdbserver-nb" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866899 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866911 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28f211b-be26-4f15-92a1-36b91cb53bbb" containerName="ovsdbserver-sb" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866920 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" containerName="cinder-api-log" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866930 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" containerName="galera" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866937 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb44c01-e79f-42d8-912c-66db07c6b328" containerName="nova-cell1-conductor-conductor" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.866946 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" containerName="probe" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.867479 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-595b-account-create-update-jk6fx"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.867495 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.867746 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" containerName="memcached" containerID="cri-o://d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.869643 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.870907 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="30f9b591-fea6-4010-99db-45eef2237cdc" containerName="kube-state-metrics" containerID="cri-o://100081f00d3d095ca7d8dca6b7343ac8590f3de539067c314527dbcd86ceca1c" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.872089 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-central-agent" containerID="cri-o://9186f065f875b33ba8c3817977c815fff7b67dc7404ddb06ea6a85fb64800755" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.872212 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="proxy-httpd" containerID="cri-o://25cc12087ab98d0fc79e679c4de5be61f557329f293a5a68393ba8b20a57c428" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.872245 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="sg-core" containerID="cri-o://0f6848e1ccd25c33da13cfce62f451555f794b35623ff124d320281a39cb9911" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.872275 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-notification-agent" containerID="cri-o://62da5f526098b3b9f5437a81119156f87878963a1c44c026236d9b63e20bbac5" gracePeriod=30 Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.879035 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.931774 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:46564->10.217.0.207:8775: read: connection reset by peer" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.934326 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:46568->10.217.0.207:8775: read: connection reset by peer" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.944178 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d69b6c966-npv8t" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:43470->10.217.0.166:9311: read: connection reset by peer" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.944292 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d69b6c966-npv8t" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:43464->10.217.0.166:9311: read: connection reset by 
peer" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.953589 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4gqzx"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.954453 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrsv\" (UniqueName: \"kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.954533 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.954621 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9tkc\" (UniqueName: \"kubernetes.io/projected/98fff7bd-db4c-462f-8f2c-34733f4e81ad-kube-api-access-t9tkc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.954632 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fff7bd-db4c-462f-8f2c-34733f4e81ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.961272 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-sdn7d"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.962090 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="30f9b591-fea6-4010-99db-45eef2237cdc" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.176:8080/livez\": EOF" Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.971580 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4gqzx"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.977677 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-sdn7d"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.982456 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-97bdbd495-2prdt"] Jan 30 05:29:47 crc kubenswrapper[4931]: I0130 05:29:47.982666 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-97bdbd495-2prdt" podUID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" containerName="keystone-api" containerID="cri-o://2031f531f783ff9fda1aa19098c42b8b6619a54760d8a1056a1788a1c38b669b" gracePeriod=30 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:47.996828 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.004490 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6xxt5"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.016486 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data" (OuterVolumeSpecName: "config-data") pod "bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" (UID: 
"bb6268b1-9fae-42a1-9f9e-0bed9c69cadd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.048183 4931 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: , extraDiskErr: could not stat "/var/log/pods/openstack_openstack-cell1-galera-0_7e9a7f86-7e9d-4062-9c50-72d0d82e24ef/galera/0.log" to get inode usage: stat /var/log/pods/openstack_openstack-cell1-galera-0_7e9a7f86-7e9d-4062-9c50-72d0d82e24ef/galera/0.log: no such file or directory Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.056609 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.056751 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrsv\" (UniqueName: \"kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.056845 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.057219 4931 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.057267 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts podName:f0ad84e9-a4cc-40a3-850c-f7757aad5b5d nodeName:}" failed. No retries permitted until 2026-01-30 05:29:48.557254062 +0000 UTC m=+1323.927164319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts") pod "keystone-595b-account-create-update-jk6fx" (UID: "f0ad84e9-a4cc-40a3-850c-f7757aad5b5d") : configmap "openstack-scripts" not found Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.060908 4931 projected.go:194] Error preparing data for projected volume kube-api-access-9qrsv for pod openstack/keystone-595b-account-create-update-jk6fx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.060983 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv podName:f0ad84e9-a4cc-40a3-850c-f7757aad5b5d nodeName:}" failed. No retries permitted until 2026-01-30 05:29:48.560956827 +0000 UTC m=+1323.930867084 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9qrsv" (UniqueName: "kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv") pod "keystone-595b-account-create-update-jk6fx" (UID: "f0ad84e9-a4cc-40a3-850c-f7757aad5b5d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.127193 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9bbdw"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.129197 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9bbdw"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.143590 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-595b-account-create-update-jk6fx"] Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.144150 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-9qrsv operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-595b-account-create-update-jk6fx" podUID="f0ad84e9-a4cc-40a3-850c-f7757aad5b5d" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.171665 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.172157 4931 generic.go:334] "Generic (PLEG): container finished" podID="623f3c8f-d741-4ba4-baca-905a13102f38" containerID="7543927ad63a3b9e73b08c0c9fa7a83b683adacd56d1d27f7d72109cd07d12dd" exitCode=1 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.172210 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xxt5" event={"ID":"623f3c8f-d741-4ba4-baca-905a13102f38","Type":"ContainerDied","Data":"7543927ad63a3b9e73b08c0c9fa7a83b683adacd56d1d27f7d72109cd07d12dd"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.172241 4931 scope.go:117] "RemoveContainer" containerID="3a989cbeb7f4bf86d12831f4d3313ed2a342cf72d86cb0362de32fea4fb7324a" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.172616 4931 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-6xxt5" secret="" err="secret \"galera-openstack-dockercfg-ms6mr\" not found" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.172651 4931 scope.go:117] "RemoveContainer" containerID="7543927ad63a3b9e73b08c0c9fa7a83b683adacd56d1d27f7d72109cd07d12dd" Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.172911 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-6xxt5_openstack(623f3c8f-d741-4ba4-baca-905a13102f38)\"" pod="openstack/root-account-create-update-6xxt5" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.198484 4931 generic.go:334] "Generic (PLEG): container finished" podID="30f9b591-fea6-4010-99db-45eef2237cdc" containerID="100081f00d3d095ca7d8dca6b7343ac8590f3de539067c314527dbcd86ceca1c" exitCode=2 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.198741 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30f9b591-fea6-4010-99db-45eef2237cdc","Type":"ContainerDied","Data":"100081f00d3d095ca7d8dca6b7343ac8590f3de539067c314527dbcd86ceca1c"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.219721 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9bb44c01-e79f-42d8-912c-66db07c6b328","Type":"ContainerDied","Data":"1c009cefddaacdb91464295ae716d32cc8f92887e0517a9011cacb821ea578bd"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.219823 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.234596 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.206:3000/\": dial tcp 10.217.0.206:3000: connect: connection refused" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.234721 4931 generic.go:334] "Generic (PLEG): container finished" podID="58928fea-709c-44d8-bd12-23937da8e2c4" containerID="44392d9ac535d9a3ce2ca47aa88e680823c3197a2e50d537aa67df4b03e52fd1" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.234804 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d69b6c966-npv8t" event={"ID":"58928fea-709c-44d8-bd12-23937da8e2c4","Type":"ContainerDied","Data":"44392d9ac535d9a3ce2ca47aa88e680823c3197a2e50d537aa67df4b03e52fd1"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.239767 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.241629 4931 generic.go:334] "Generic (PLEG): container finished" podID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerID="d70e15b0f074e59f1a9f39048c2cf45a62e81400091cb70df139030d514fe003" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.241671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"406c25f3-c398-4ace-ba4b-1d9b48b289a2","Type":"ContainerDied","Data":"d70e15b0f074e59f1a9f39048c2cf45a62e81400091cb70df139030d514fe003"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.254349 4931 scope.go:117] "RemoveContainer" containerID="9aac5eb9a7735bf1efc6d134170c4297691e32c19fa1a2cd01ab0ae918243436" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.264140 4931 generic.go:334] "Generic (PLEG): container finished" podID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerID="6c90254ae67ae50ab19fa555ce55d1839d94322882a09bb91ad616b62efcfeeb" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.264198 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74","Type":"ContainerDied","Data":"6c90254ae67ae50ab19fa555ce55d1839d94322882a09bb91ad616b62efcfeeb"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.269730 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c9gv\" (UniqueName: \"kubernetes.io/projected/46ad7de9-e01d-414c-8a4d-9073ad986186-kube-api-access-5c9gv\") pod \"46ad7de9-e01d-414c-8a4d-9073ad986186\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.269939 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ad7de9-e01d-414c-8a4d-9073ad986186-operator-scripts\") pod \"46ad7de9-e01d-414c-8a4d-9073ad986186\" (UID: \"46ad7de9-e01d-414c-8a4d-9073ad986186\") " Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.270788 4931 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.270828 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts podName:623f3c8f-d741-4ba4-baca-905a13102f38 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:48.770814263 +0000 UTC m=+1324.140724520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts") pod "root-account-create-update-6xxt5" (UID: "623f3c8f-d741-4ba4-baca-905a13102f38") : configmap "openstack-scripts" not found Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.271023 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ad7de9-e01d-414c-8a4d-9073ad986186-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46ad7de9-e01d-414c-8a4d-9073ad986186" (UID: "46ad7de9-e01d-414c-8a4d-9073ad986186"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.271263 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-cb5c-account-create-update-n52qj" event={"ID":"46ad7de9-e01d-414c-8a4d-9073ad986186","Type":"ContainerDied","Data":"d1886781d10ecb06348f9abb9caccf69a1bc5841a942465cb5e9e6aa23d451dd"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.271330 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-cb5c-account-create-update-n52qj" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.271576 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.286617 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ad7de9-e01d-414c-8a4d-9073ad986186-kube-api-access-5c9gv" (OuterVolumeSpecName: "kube-api-access-5c9gv") pod "46ad7de9-e01d-414c-8a4d-9073ad986186" (UID: "46ad7de9-e01d-414c-8a4d-9073ad986186"). InnerVolumeSpecName "kube-api-access-5c9gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.292171 4931 generic.go:334] "Generic (PLEG): container finished" podID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerID="25cc12087ab98d0fc79e679c4de5be61f557329f293a5a68393ba8b20a57c428" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.292206 4931 generic.go:334] "Generic (PLEG): container finished" podID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerID="0f6848e1ccd25c33da13cfce62f451555f794b35623ff124d320281a39cb9911" exitCode=2 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.292254 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerDied","Data":"25cc12087ab98d0fc79e679c4de5be61f557329f293a5a68393ba8b20a57c428"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.292279 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerDied","Data":"0f6848e1ccd25c33da13cfce62f451555f794b35623ff124d320281a39cb9911"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.299006 4931 generic.go:334] "Generic (PLEG): container finished" podID="7c0ddaec-4521-4898-8649-262b52f24acb" containerID="3509d69982e816f9732671b256bbf363b32c9a199362011499a8607bf3a6e808" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.299070 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7c0ddaec-4521-4898-8649-262b52f24acb","Type":"ContainerDied","Data":"3509d69982e816f9732671b256bbf363b32c9a199362011499a8607bf3a6e808"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.299215 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.307141 4931 generic.go:334] "Generic (PLEG): container finished" podID="3415cfc4-a71a-4110-bf82-295181bb386f" containerID="cc448c5e4a9d4def969b75156b3cc39bbccbe47f49a05ef9d15592b4643a809f" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.307275 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3415cfc4-a71a-4110-bf82-295181bb386f","Type":"ContainerDied","Data":"cc448c5e4a9d4def969b75156b3cc39bbccbe47f49a05ef9d15592b4643a809f"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.311820 4931 generic.go:334] "Generic (PLEG): container finished" podID="9ac55021-a07e-443f-9ee9-e7516556b975" containerID="1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d" exitCode=143 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.311897 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" event={"ID":"9ac55021-a07e-443f-9ee9-e7516556b975","Type":"ContainerDied","Data":"1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.315262 4931 generic.go:334] "Generic (PLEG): container finished" podID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerID="eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123" exitCode=143 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.315331 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7789bbd757-45b5w" event={"ID":"c0c7aeee-9023-433a-83d0-aa0e9942a0ed","Type":"ContainerDied","Data":"eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.321661 4931 generic.go:334] "Generic (PLEG): container finished" podID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerID="e8519c60ec437acc9c9b5934ab3951ad5ad349186eda26fc85c2bae9b3010203" exitCode=0 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.321726 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-798b7dc5fb-xl2zq" event={"ID":"ebe4f743-9a60-428f-8b58-14ba160d9fd7","Type":"ContainerDied","Data":"e8519c60ec437acc9c9b5934ab3951ad5ad349186eda26fc85c2bae9b3010203"} Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.321801 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-798b7dc5fb-xl2zq" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.328737 4931 scope.go:117] "RemoveContainer" containerID="3509d69982e816f9732671b256bbf363b32c9a199362011499a8607bf3a6e808" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.328873 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.328945 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-76fb878d5c-s22sw" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.329003 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-df05-account-create-update-nrbm4" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.329061 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.329086 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.329170 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.342958 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.359952 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.368183 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371703 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-public-tls-certs\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371757 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-logs\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371798 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-scripts\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371830 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-config-data\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371889 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-combined-ca-bundle\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371933 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-internal-tls-certs\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371954 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.371982 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-public-tls-certs\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372006 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68vcv\" (UniqueName: \"kubernetes.io/projected/ebe4f743-9a60-428f-8b58-14ba160d9fd7-kube-api-access-68vcv\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372031 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpkl\" (UniqueName: \"kubernetes.io/projected/7c0ddaec-4521-4898-8649-262b52f24acb-kube-api-access-pmpkl\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372062 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-config-data\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372082 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-httpd-run\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372116 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-combined-ca-bundle\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372146 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-scripts\") pod \"7c0ddaec-4521-4898-8649-262b52f24acb\" (UID: \"7c0ddaec-4521-4898-8649-262b52f24acb\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372175 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebe4f743-9a60-428f-8b58-14ba160d9fd7-logs\") pod \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\" (UID: \"ebe4f743-9a60-428f-8b58-14ba160d9fd7\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.372628 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ad7de9-e01d-414c-8a4d-9073ad986186-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.373034 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c9gv\" (UniqueName: \"kubernetes.io/projected/46ad7de9-e01d-414c-8a4d-9073ad986186-kube-api-access-5c9gv\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.373087 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-logs" (OuterVolumeSpecName: "logs") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.373645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebe4f743-9a60-428f-8b58-14ba160d9fd7-logs" (OuterVolumeSpecName: "logs") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.373914 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.376983 4931 scope.go:117] "RemoveContainer" containerID="754804bc268dc311547eddc996a035b132392b26798898d2ba034bc32dc1ee16" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.377267 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0ddaec-4521-4898-8649-262b52f24acb-kube-api-access-pmpkl" (OuterVolumeSpecName: "kube-api-access-pmpkl") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "kube-api-access-pmpkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.377597 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-scripts" (OuterVolumeSpecName: "scripts") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.380019 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-scripts" (OuterVolumeSpecName: "scripts") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.393886 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.394734 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe4f743-9a60-428f-8b58-14ba160d9fd7-kube-api-access-68vcv" (OuterVolumeSpecName: "kube-api-access-68vcv") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "kube-api-access-68vcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.417481 4931 scope.go:117] "RemoveContainer" containerID="e8519c60ec437acc9c9b5934ab3951ad5ad349186eda26fc85c2bae9b3010203" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.430037 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-76fb878d5c-s22sw"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.440337 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-76fb878d5c-s22sw"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.446290 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.447556 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerName="galera" containerID="cri-o://2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257" gracePeriod=30 Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.477941 4931 scope.go:117] "RemoveContainer" containerID="1d733edb3ceaca43f34355e23bbaaced9e55a731057ead7b89c96398337d6e11" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.479507 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.481010 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-combined-ca-bundle\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.481898 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.481944 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-config-data\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.481977 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-internal-tls-certs\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482015 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnhps\" (UniqueName: \"kubernetes.io/projected/3415cfc4-a71a-4110-bf82-295181bb386f-kube-api-access-nnhps\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482125 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-httpd-run\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482166 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-scripts\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482217 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-logs\") pod \"3415cfc4-a71a-4110-bf82-295181bb386f\" (UID: \"3415cfc4-a71a-4110-bf82-295181bb386f\") " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482664 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482680 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebe4f743-9a60-428f-8b58-14ba160d9fd7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482689 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482701 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482708 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482726 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482735 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68vcv\" (UniqueName: \"kubernetes.io/projected/ebe4f743-9a60-428f-8b58-14ba160d9fd7-kube-api-access-68vcv\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482744 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmpkl\" (UniqueName: \"kubernetes.io/projected/7c0ddaec-4521-4898-8649-262b52f24acb-kube-api-access-pmpkl\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.482753 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7c0ddaec-4521-4898-8649-262b52f24acb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.485004 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: 
"3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.485586 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-logs" (OuterVolumeSpecName: "logs") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.486612 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.494021 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3415cfc4-a71a-4110-bf82-295181bb386f-kube-api-access-nnhps" (OuterVolumeSpecName: "kube-api-access-nnhps") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "kube-api-access-nnhps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.498444 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.503954 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.506264 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.507864 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-scripts" (OuterVolumeSpecName: "scripts") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.518325 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.522794 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.524053 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.527274 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-config-data" (OuterVolumeSpecName: "config-data") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.550562 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-df05-account-create-update-nrbm4"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.559387 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.559793 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c0ddaec-4521-4898-8649-262b52f24acb" (UID: "7c0ddaec-4521-4898-8649-262b52f24acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.563067 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-df05-account-create-update-nrbm4"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.568919 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.574785 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.583756 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrsv\" (UniqueName: \"kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.583909 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.584138 4931 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.584304 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts podName:f0ad84e9-a4cc-40a3-850c-f7757aad5b5d nodeName:}" failed. No retries permitted until 2026-01-30 05:29:49.584288389 +0000 UTC m=+1324.954198646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts") pod "keystone-595b-account-create-update-jk6fx" (UID: "f0ad84e9-a4cc-40a3-850c-f7757aad5b5d") : configmap "openstack-scripts" not found Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584392 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584479 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnhps\" (UniqueName: \"kubernetes.io/projected/3415cfc4-a71a-4110-bf82-295181bb386f-kube-api-access-nnhps\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584533 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584581 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584628 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584676 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584730 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584778 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3415cfc4-a71a-4110-bf82-295181bb386f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.584929 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0ddaec-4521-4898-8649-262b52f24acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.601127 4931 projected.go:194] Error preparing data for projected volume kube-api-access-9qrsv for pod openstack/keystone-595b-account-create-update-jk6fx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.601201 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv podName:f0ad84e9-a4cc-40a3-850c-f7757aad5b5d nodeName:}" failed. No retries permitted until 2026-01-30 05:29:49.601182937 +0000 UTC m=+1324.971093194 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9qrsv" (UniqueName: "kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv") pod "keystone-595b-account-create-update-jk6fx" (UID: "f0ad84e9-a4cc-40a3-850c-f7757aad5b5d") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.606289 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.609889 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.612938 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.637883 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.637916 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-config-data" (OuterVolumeSpecName: "config-data") pod "3415cfc4-a71a-4110-bf82-295181bb386f" (UID: "3415cfc4-a71a-4110-bf82-295181bb386f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.637931 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-config-data" (OuterVolumeSpecName: "config-data") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.664318 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ebe4f743-9a60-428f-8b58-14ba160d9fd7" (UID: "ebe4f743-9a60-428f-8b58-14ba160d9fd7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686515 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686547 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686560 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686570 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686578 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686586 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe4f743-9a60-428f-8b58-14ba160d9fd7-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.686594 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3415cfc4-a71a-4110-bf82-295181bb386f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.733082 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.742156 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.765291 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-cb5c-account-create-update-n52qj"]
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.772694 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d69b6c966-npv8t"
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.780063 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.784477 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-cb5c-account-create-update-n52qj"]
Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.788353 4931 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.788403 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts podName:623f3c8f-d741-4ba4-baca-905a13102f38 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:49.788390642 +0000 UTC m=+1325.158300899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts") pod "root-account-create-update-6xxt5" (UID: "623f3c8f-d741-4ba4-baca-905a13102f38") : configmap "openstack-scripts" not found
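[Editor's note] The nestedpendingoperations.go records above and below show the kubelet's per-volume retry behaviour: each failed MountVolume.SetUp schedules the next attempt with a growing durationBeforeRetry (1s here, 2s and 8s for other volumes later in this log), i.e. a per-operation-key exponential backoff. A minimal Go sketch of that doubling pattern, assuming a factor of 2 and a cap; this is an illustration only, not kubelet's actual implementation:

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a limit, the pattern visible
// in the durationBeforeRetry values in the surrounding records. Illustrative
// only; the real kubelet tracks one such backoff per {volume, pod} key.
func nextBackoff(prev, limit time.Duration) time.Duration {
	if prev <= 0 {
		return time.Second // first retry after ~1s
	}
	next := 2 * prev
	if next > limit {
		return limit
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 5; i++ {
		d = nextBackoff(d, 2*time.Minute)
		fmt.Println(d) // 1s, 2s, 4s, 8s, 16s, ...
	}
}

Because each volume backs off independently, different volumes in the same log window can be seen waiting 1s, 2s, or 8s at the same moment.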
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.825454 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.850220 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.888996 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-nova-metadata-tls-certs\") pod \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889037 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-public-tls-certs\") pod \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889067 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dkxm\" (UniqueName: \"kubernetes.io/projected/406c25f3-c398-4ace-ba4b-1d9b48b289a2-kube-api-access-8dkxm\") pod \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889088 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889103 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znwrn\" (UniqueName: \"kubernetes.io/projected/30f9b591-fea6-4010-99db-45eef2237cdc-kube-api-access-znwrn\") pod \"30f9b591-fea6-4010-99db-45eef2237cdc\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889128 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-combined-ca-bundle\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889154 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-internal-tls-certs\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889184 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data-custom\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-certs\") pod \"30f9b591-fea6-4010-99db-45eef2237cdc\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889227 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-internal-tls-certs\") pod \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889253 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-public-tls-certs\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889274 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x99zt\" (UniqueName: \"kubernetes.io/projected/58928fea-709c-44d8-bd12-23937da8e2c4-kube-api-access-x99zt\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889299 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-combined-ca-bundle\") pod \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889322 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-logs\") pod \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889340 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-config-data\") pod \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889355 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-combined-ca-bundle\") pod \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889373 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-config-data\") pod \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889392 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-config\") pod \"30f9b591-fea6-4010-99db-45eef2237cdc\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889471 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58928fea-709c-44d8-bd12-23937da8e2c4-logs\") pod \"58928fea-709c-44d8-bd12-23937da8e2c4\" (UID: \"58928fea-709c-44d8-bd12-23937da8e2c4\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889513 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c25f3-c398-4ace-ba4b-1d9b48b289a2-logs\") pod \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\" (UID: \"406c25f3-c398-4ace-ba4b-1d9b48b289a2\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889550 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc8vz\" (UniqueName: \"kubernetes.io/projected/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-kube-api-access-pc8vz\") pod \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\" (UID: \"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.889572 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-combined-ca-bundle\") pod \"30f9b591-fea6-4010-99db-45eef2237cdc\" (UID: \"30f9b591-fea6-4010-99db-45eef2237cdc\") "
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.894594 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58928fea-709c-44d8-bd12-23937da8e2c4-logs" (OuterVolumeSpecName: "logs") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.895693 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58928fea-709c-44d8-bd12-23937da8e2c4-kube-api-access-x99zt" (OuterVolumeSpecName: "kube-api-access-x99zt") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "kube-api-access-x99zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.896578 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f9b591-fea6-4010-99db-45eef2237cdc-kube-api-access-znwrn" (OuterVolumeSpecName: "kube-api-access-znwrn") pod "30f9b591-fea6-4010-99db-45eef2237cdc" (UID: "30f9b591-fea6-4010-99db-45eef2237cdc"). InnerVolumeSpecName "kube-api-access-znwrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.896898 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-logs" (OuterVolumeSpecName: "logs") pod "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" (UID: "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.907126 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406c25f3-c398-4ace-ba4b-1d9b48b289a2-logs" (OuterVolumeSpecName: "logs") pod "406c25f3-c398-4ace-ba4b-1d9b48b289a2" (UID: "406c25f3-c398-4ace-ba4b-1d9b48b289a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.914625 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406c25f3-c398-4ace-ba4b-1d9b48b289a2-kube-api-access-8dkxm" (OuterVolumeSpecName: "kube-api-access-8dkxm") pod "406c25f3-c398-4ace-ba4b-1d9b48b289a2" (UID: "406c25f3-c398-4ace-ba4b-1d9b48b289a2"). InnerVolumeSpecName "kube-api-access-8dkxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.919341 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.924773 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f9b591-fea6-4010-99db-45eef2237cdc" (UID: "30f9b591-fea6-4010-99db-45eef2237cdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.948216 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-kube-api-access-pc8vz" (OuterVolumeSpecName: "kube-api-access-pc8vz") pod "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" (UID: "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74"). InnerVolumeSpecName "kube-api-access-pc8vz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.977987 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "30f9b591-fea6-4010-99db-45eef2237cdc" (UID: "30f9b591-fea6-4010-99db-45eef2237cdc"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.980482 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-798b7dc5fb-xl2zq"]
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.986977 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "406c25f3-c398-4ace-ba4b-1d9b48b289a2" (UID: "406c25f3-c398-4ace-ba4b-1d9b48b289a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.991275 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991294 4931 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: E0130 05:29:48.991332 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data podName:fc3f4796-66b1-452b-afca-5e62cbf2a53b nodeName:}" failed. No retries permitted until 2026-01-30 05:29:56.991314311 +0000 UTC m=+1332.361224568 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data") pod "rabbitmq-server-0" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b") : configmap "rabbitmq-config-data" not found
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991357 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58928fea-709c-44d8-bd12-23937da8e2c4-logs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991370 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406c25f3-c398-4ace-ba4b-1d9b48b289a2-logs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991382 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc8vz\" (UniqueName: \"kubernetes.io/projected/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-kube-api-access-pc8vz\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991397 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991406 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dkxm\" (UniqueName: \"kubernetes.io/projected/406c25f3-c398-4ace-ba4b-1d9b48b289a2-kube-api-access-8dkxm\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991416 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znwrn\" (UniqueName: \"kubernetes.io/projected/30f9b591-fea6-4010-99db-45eef2237cdc-kube-api-access-znwrn\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991437 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991447 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x99zt\" (UniqueName: \"kubernetes.io/projected/58928fea-709c-44d8-bd12-23937da8e2c4-kube-api-access-x99zt\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991455 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:48 crc kubenswrapper[4931]: I0130 05:29:48.991463 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-logs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.000887 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-config-data" (OuterVolumeSpecName: "config-data") pod "406c25f3-c398-4ace-ba4b-1d9b48b289a2" (UID: "406c25f3-c398-4ace-ba4b-1d9b48b289a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.003302 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-798b7dc5fb-xl2zq"]
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.025002 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-config-data" (OuterVolumeSpecName: "config-data") pod "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" (UID: "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.032550 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.032697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "406c25f3-c398-4ace-ba4b-1d9b48b289a2" (UID: "406c25f3-c398-4ace-ba4b-1d9b48b289a2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.035856 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data" (OuterVolumeSpecName: "config-data") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.047681 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.051669 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "30f9b591-fea6-4010-99db-45eef2237cdc" (UID: "30f9b591-fea6-4010-99db-45eef2237cdc"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.051794 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" (UID: "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.053179 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" (UID: "e4e6d6a8-599b-4ab9-b1f7-cf521e455d74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.073986 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "406c25f3-c398-4ace-ba4b-1d9b48b289a2" (UID: "406c25f3-c398-4ace-ba4b-1d9b48b289a2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.078994 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "58928fea-709c-44d8-bd12-23937da8e2c4" (UID: "58928fea-709c-44d8-bd12-23937da8e2c4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093372 4931 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093400 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093411 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093422 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093441 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093450 4931 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f9b591-fea6-4010-99db-45eef2237cdc-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093459 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093467 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58928fea-709c-44d8-bd12-23937da8e2c4-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093475 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406c25f3-c398-4ace-ba4b-1d9b48b289a2-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093483 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.093491 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.212950 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.277258 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.278808 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.279800 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.279835 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" containerName="nova-scheduler-scheduler"
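[Editor's note] The three identical ExecSync failures above come from the nova-scheduler readiness probe, which execs /usr/bin/pgrep -r DRST nova-scheduler inside the container; while CRI-O is tearing the container down it refuses to register new exec PIDs, so each attempt fails until the container is gone. A sketch of how such an exec readiness probe is declared with the core/v1 Go types (the command is taken verbatim from the records above; the types assume a recent k8s.io/api, where ProbeHandler replaced Handler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Readiness probe equivalent to the one the kubelet is running above.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
			},
		},
	}
	fmt.Println(probe.Exec.Command)
}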
"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" (UID: "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.298169 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-config-data" (OuterVolumeSpecName: "config-data") pod "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" (UID: "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.298610 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.298629 4931 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.302562 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kube-api-access-l65cq" (OuterVolumeSpecName: "kube-api-access-l65cq") pod "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" (UID: "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba"). InnerVolumeSpecName "kube-api-access-l65cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.331943 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" (UID: "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.350181 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.350395 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.351407 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.351465 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.352720 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3415cfc4-a71a-4110-bf82-295181bb386f","Type":"ContainerDied","Data":"8686488d53f891915ba13840ec460659816d6140e0778cc81ec5034b3206cf0a"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.352760 4931 scope.go:117] "RemoveContainer" containerID="cc448c5e4a9d4def969b75156b3cc39bbccbe47f49a05ef9d15592b4643a809f" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.352930 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.360934 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" (UID: "3bc265a8-34e2-4ec9-bdd5-69d75ea14bba"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.363400 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.365060 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.368338 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.368388 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.373865 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e4e6d6a8-599b-4ab9-b1f7-cf521e455d74","Type":"ContainerDied","Data":"575ed258be47595d56c46486254ee62d83c72e80bf57019828419331f46802a7"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.374093 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.382997 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d69b6c966-npv8t" event={"ID":"58928fea-709c-44d8-bd12-23937da8e2c4","Type":"ContainerDied","Data":"80b7562d2e28920f91efc6005a3dada9547915b81544850dc4480d2be479f27a"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.383190 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d69b6c966-npv8t" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.405111 4931 generic.go:334] "Generic (PLEG): container finished" podID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerID="9186f065f875b33ba8c3817977c815fff7b67dc7404ddb06ea6a85fb64800755" exitCode=0 Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.405358 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerDied","Data":"9186f065f875b33ba8c3817977c815fff7b67dc7404ddb06ea6a85fb64800755"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.407334 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.407944 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l65cq\" (UniqueName: \"kubernetes.io/projected/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-kube-api-access-l65cq\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.408013 4931 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.409374 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.413816 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"406c25f3-c398-4ace-ba4b-1d9b48b289a2","Type":"ContainerDied","Data":"76659dd7c99e1a96db2d103669e2a9a9122278f8d46deab3719b00840b99f159"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.413973 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.423707 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.429316 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.439128 4931 generic.go:334] "Generic (PLEG): container finished" podID="3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" containerID="d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a" exitCode=0 Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.439352 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.439624 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.455972 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.456198 4931 scope.go:117] "RemoveContainer" containerID="3795752efe01b170e153d47107ec186f09220104cdec5c71d0a210a63580f858" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.459681 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d338366-1ff1-4c95-aa94-30ba5c813138" path="/var/lib/kubelet/pods/0d338366-1ff1-4c95-aa94-30ba5c813138/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.460350 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2400d2d7-1da5-4a38-a558-c970226f95b9" path="/var/lib/kubelet/pods/2400d2d7-1da5-4a38-a558-c970226f95b9/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.460901 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2565fa42-f180-4948-8b2f-68c419d78d2b" path="/var/lib/kubelet/pods/2565fa42-f180-4948-8b2f-68c419d78d2b/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.465670 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" path="/var/lib/kubelet/pods/3415cfc4-a71a-4110-bf82-295181bb386f/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.466859 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ad7de9-e01d-414c-8a4d-9073ad986186" path="/var/lib/kubelet/pods/46ad7de9-e01d-414c-8a4d-9073ad986186/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.467545 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a63fb4-24bc-4834-b6e7-937688c5de09" path="/var/lib/kubelet/pods/49a63fb4-24bc-4834-b6e7-937688c5de09/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.468969 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" path="/var/lib/kubelet/pods/7c0ddaec-4521-4898-8649-262b52f24acb/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.469856 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9a7f86-7e9d-4062-9c50-72d0d82e24ef" path="/var/lib/kubelet/pods/7e9a7f86-7e9d-4062-9c50-72d0d82e24ef/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.470991 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef60747-e73b-451c-b8e1-6abd596d31bb" path="/var/lib/kubelet/pods/7ef60747-e73b-451c-b8e1-6abd596d31bb/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.471867 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88988b92-cd64-490d-b55f-959ecf4095af" path="/var/lib/kubelet/pods/88988b92-cd64-490d-b55f-959ecf4095af/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.472455 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d21216-5a0f-422c-9642-0ea353a33e82" path="/var/lib/kubelet/pods/98d21216-5a0f-422c-9642-0ea353a33e82/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.473199 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fff7bd-db4c-462f-8f2c-34733f4e81ad" path="/var/lib/kubelet/pods/98fff7bd-db4c-462f-8f2c-34733f4e81ad/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.473623 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb44c01-e79f-42d8-912c-66db07c6b328" path="/var/lib/kubelet/pods/9bb44c01-e79f-42d8-912c-66db07c6b328/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: 
I0130 05:29:49.474561 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d923658-472c-4565-bae3-5eb1e329a92c" path="/var/lib/kubelet/pods/9d923658-472c-4565-bae3-5eb1e329a92c/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.475060 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6268b1-9fae-42a1-9f9e-0bed9c69cadd" path="/var/lib/kubelet/pods/bb6268b1-9fae-42a1-9f9e-0bed9c69cadd/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.475756 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5722020-7619-4a17-8990-e025402e2c3a" path="/var/lib/kubelet/pods/c5722020-7619-4a17-8990-e025402e2c3a/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.476967 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e" path="/var/lib/kubelet/pods/c86b96a8-cd5c-4ea7-8a6f-5b3a4b2d923e/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.477581 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13136a7-4633-4386-822d-ceb2cb3320b8" path="/var/lib/kubelet/pods/d13136a7-4633-4386-822d-ceb2cb3320b8/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.477943 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" path="/var/lib/kubelet/pods/ebe4f743-9a60-428f-8b58-14ba160d9fd7/volumes" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.478969 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.478995 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"30f9b591-fea6-4010-99db-45eef2237cdc","Type":"ContainerDied","Data":"3ab5021fa2dee4a0cbf054b6b79552974b77b39e6c35cbc24e07bc801848b48b"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.479016 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.479032 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba","Type":"ContainerDied","Data":"d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.479079 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bc265a8-34e2-4ec9-bdd5-69d75ea14bba","Type":"ContainerDied","Data":"04861bcc57b9390c9ad1874bbf632a1a5e0da259d664ad8c22e1c2db45c343a6"} Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.479093 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d69b6c966-npv8t"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.479105 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7d69b6c966-npv8t"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.497739 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.525346 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.533507 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.541956 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/kube-state-metrics-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.584198 4931 scope.go:117] "RemoveContainer" containerID="6c90254ae67ae50ab19fa555ce55d1839d94322882a09bb91ad616b62efcfeeb" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.590264 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.606395 4931 scope.go:117] "RemoveContainer" containerID="a268ff4ead170d4fc7c25a89e846ed2d0f10278b94da4082529cc4ebd9ab4f0e" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.615568 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.618156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrsv\" (UniqueName: \"kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.618244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts\") pod \"keystone-595b-account-create-update-jk6fx\" (UID: \"f0ad84e9-a4cc-40a3-850c-f7757aad5b5d\") " pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.618407 4931 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.618471 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts podName:f0ad84e9-a4cc-40a3-850c-f7757aad5b5d nodeName:}" failed. No retries permitted until 2026-01-30 05:29:51.61845634 +0000 UTC m=+1326.988366597 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts") pod "keystone-595b-account-create-update-jk6fx" (UID: "f0ad84e9-a4cc-40a3-850c-f7757aad5b5d") : configmap "openstack-scripts" not found Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.621012 4931 projected.go:194] Error preparing data for projected volume kube-api-access-9qrsv for pod openstack/keystone-595b-account-create-update-jk6fx: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.621213 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv podName:f0ad84e9-a4cc-40a3-850c-f7757aad5b5d nodeName:}" failed. No retries permitted until 2026-01-30 05:29:51.621049263 +0000 UTC m=+1326.990959520 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9qrsv" (UniqueName: "kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv") pod "keystone-595b-account-create-update-jk6fx" (UID: "f0ad84e9-a4cc-40a3-850c-f7757aad5b5d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.625619 4931 scope.go:117] "RemoveContainer" containerID="44392d9ac535d9a3ce2ca47aa88e680823c3197a2e50d537aa67df4b03e52fd1" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.651888 4931 scope.go:117] "RemoveContainer" containerID="0e0a199cc977b5213010336cc2b6c461a3916b61b6c3d9f6dc8eecc7d5c8d17e" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.686760 4931 scope.go:117] "RemoveContainer" containerID="d70e15b0f074e59f1a9f39048c2cf45a62e81400091cb70df139030d514fe003" Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.727413 4931 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.727600 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data podName:081e3873-ea99-4486-925f-784a98e49405 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:57.727579126 +0000 UTC m=+1333.097489463 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data") pod "rabbitmq-cell1-server-0" (UID: "081e3873-ea99-4486-925f-784a98e49405") : configmap "rabbitmq-cell1-config-data" not found Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.742675 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.759619 4931 scope.go:117] "RemoveContainer" containerID="e5dadd497214a5d2efc5b8027947f3661f7f73599b0778570358c42329955e8d" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.798084 4931 scope.go:117] "RemoveContainer" containerID="100081f00d3d095ca7d8dca6b7343ac8590f3de539067c314527dbcd86ceca1c" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.825256 4931 scope.go:117] "RemoveContainer" containerID="d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a" Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.829035 4931 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.829115 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts podName:623f3c8f-d741-4ba4-baca-905a13102f38 nodeName:}" failed. No retries permitted until 2026-01-30 05:29:51.829100648 +0000 UTC m=+1327.199010905 (durationBeforeRetry 2s). 
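[Editor's note] Both keystone-595b-account-create-update-jk6fx failures above are volume-level: the operator-scripts ConfigMap volume needs the missing openstack/openstack-scripts ConfigMap, and kube-api-access-9qrsv is a projected service-account token volume whose token cannot be minted because the pod's ServiceAccount (galera-openstack) does not exist. For reference, a sketch of the shape of such a projected token volume in the core/v1 Go types; this is an illustration only, since the kubelet synthesizes kube-api-access-* volumes itself:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Requested token lifetime; 3607s is the value commonly used for
	// kube-api-access volumes (an assumption here, not taken from this log).
	expiry := int64(3607)

	vol := corev1.Volume{
		Name: "kube-api-access-9qrsv",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					// The TokenRequest fails if the pod's ServiceAccount is
					// missing, which is exactly the "failed to fetch token"
					// error in the records above.
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}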
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts") pod "root-account-create-update-6xxt5" (UID: "623f3c8f-d741-4ba4-baca-905a13102f38") : configmap "openstack-scripts" not found Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.851552 4931 scope.go:117] "RemoveContainer" containerID="d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a" Jan 30 05:29:49 crc kubenswrapper[4931]: E0130 05:29:49.853186 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a\": container with ID starting with d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a not found: ID does not exist" containerID="d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.853221 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a"} err="failed to get container status \"d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a\": rpc error: code = NotFound desc = could not find container \"d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a\": container with ID starting with d18c2a61b748bbaa1434cb9b18af9aeefe6182613328656e2c5f81364416b28a not found: ID does not exist" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.930690 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7wh\" (UniqueName: \"kubernetes.io/projected/623f3c8f-d741-4ba4-baca-905a13102f38-kube-api-access-pq7wh\") pod \"623f3c8f-d741-4ba4-baca-905a13102f38\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.930818 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts\") pod \"623f3c8f-d741-4ba4-baca-905a13102f38\" (UID: \"623f3c8f-d741-4ba4-baca-905a13102f38\") " Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.931310 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "623f3c8f-d741-4ba4-baca-905a13102f38" (UID: "623f3c8f-d741-4ba4-baca-905a13102f38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.931809 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/623f3c8f-d741-4ba4-baca-905a13102f38-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:49 crc kubenswrapper[4931]: I0130 05:29:49.941878 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623f3c8f-d741-4ba4-baca-905a13102f38-kube-api-access-pq7wh" (OuterVolumeSpecName: "kube-api-access-pq7wh") pod "623f3c8f-d741-4ba4-baca-905a13102f38" (UID: "623f3c8f-d741-4ba4-baca-905a13102f38"). InnerVolumeSpecName "kube-api-access-pq7wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.034325 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7wh\" (UniqueName: \"kubernetes.io/projected/623f3c8f-d741-4ba4-baca-905a13102f38-kube-api-access-pq7wh\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.107973 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ggjtl" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.140216 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ggjtl" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" probeResult="failure" output=< Jan 30 05:29:50 crc kubenswrapper[4931]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 30 05:29:50 crc kubenswrapper[4931]: > Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.333229 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 05:29:50 crc kubenswrapper[4931]: E0130 05:29:50.411768 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949 is running failed: container process not found" containerID="cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 05:29:50 crc kubenswrapper[4931]: E0130 05:29:50.412175 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949 is running failed: container process not found" containerID="cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 05:29:50 crc kubenswrapper[4931]: E0130 05:29:50.412360 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949 is running failed: container process not found" containerID="cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 30 05:29:50 crc kubenswrapper[4931]: E0130 05:29:50.412392 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="ovn-northd" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.430692 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.442811 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-default\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.442909 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5fq4\" (UniqueName: \"kubernetes.io/projected/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kube-api-access-t5fq4\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.442968 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-galera-tls-certs\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443034 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kolla-config\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443125 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-operator-scripts\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443168 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443202 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-generated\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443232 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-combined-ca-bundle\") pod \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\" (UID: \"348ffd7a-9b7f-40aa-ada9-145a3a783d09\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443842 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.443718 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.444306 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.444468 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.457590 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_586d7a1d-7b2a-45ac-aacb-b77e95bf3d91/ovn-northd/0.log" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.457678 4931 generic.go:334] "Generic (PLEG): container finished" podID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerID="cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949" exitCode=139 Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.457789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91","Type":"ContainerDied","Data":"cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949"} Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.457820 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91","Type":"ContainerDied","Data":"14085ad5b1fb30e2e472a98f1ef3cb304ab6fa42857d4a5f5e235f581937f71b"} Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.457832 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14085ad5b1fb30e2e472a98f1ef3cb304ab6fa42857d4a5f5e235f581937f71b" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.460037 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.463467 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.463460 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fc3f4796-66b1-452b-afca-5e62cbf2a53b","Type":"ContainerDied","Data":"1c82fc5914a60be58942659c6c59b346ba961ba1c401d7f0c82d22447fc0b135"} Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.464081 4931 scope.go:117] "RemoveContainer" containerID="1c82fc5914a60be58942659c6c59b346ba961ba1c401d7f0c82d22447fc0b135" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.463308 4931 generic.go:334] "Generic (PLEG): container finished" podID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerID="1c82fc5914a60be58942659c6c59b346ba961ba1c401d7f0c82d22447fc0b135" exitCode=0 Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.467404 4931 generic.go:334] "Generic (PLEG): container finished" podID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerID="2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257" exitCode=0 Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.467502 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"348ffd7a-9b7f-40aa-ada9-145a3a783d09","Type":"ContainerDied","Data":"2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257"} Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.467530 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"348ffd7a-9b7f-40aa-ada9-145a3a783d09","Type":"ContainerDied","Data":"b84c7628d09612f5f198418a62d4a2daabe598f826560ac2746867af05368a8f"} Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.467658 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.470146 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.482538 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_586d7a1d-7b2a-45ac-aacb-b77e95bf3d91/ovn-northd/0.log" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.482645 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.482933 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-595b-account-create-update-jk6fx" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.483953 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6xxt5" event={"ID":"623f3c8f-d741-4ba4-baca-905a13102f38","Type":"ContainerDied","Data":"b0a95f43d76f1ec6c04259c654f8d4d57485ed8bd7f8f8efedcad5a660e7a5b0"} Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.484267 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6xxt5" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.499571 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kube-api-access-t5fq4" (OuterVolumeSpecName: "kube-api-access-t5fq4") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "kube-api-access-t5fq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.545883 4931 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.545932 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.545958 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.545970 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.551645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.552479 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/348ffd7a-9b7f-40aa-ada9-145a3a783d09-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.552502 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5fq4\" (UniqueName: \"kubernetes.io/projected/348ffd7a-9b7f-40aa-ada9-145a3a783d09-kube-api-access-t5fq4\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.563336 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.571788 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "348ffd7a-9b7f-40aa-ada9-145a3a783d09" (UID: "348ffd7a-9b7f-40aa-ada9-145a3a783d09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.604878 4931 scope.go:117] "RemoveContainer" containerID="8bdcfbd624616a917de046867a4b176539e978b80dd1b9fad737dfdab9cb1bce" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.648861 4931 scope.go:117] "RemoveContainer" containerID="2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.649062 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-595b-account-create-update-jk6fx"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.652961 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-plugins-conf\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653010 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-northd-tls-certs\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653043 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-plugins\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653064 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc3f4796-66b1-452b-afca-5e62cbf2a53b-erlang-cookie-secret\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653092 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653108 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-metrics-certs-tls-certs\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653133 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-scripts\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653189 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb59q\" (UniqueName: \"kubernetes.io/projected/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-kube-api-access-qb59q\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653221 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653241 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-combined-ca-bundle\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653266 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc3f4796-66b1-452b-afca-5e62cbf2a53b-pod-info\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653282 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-confd\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653302 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6xv5\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-kube-api-access-d6xv5\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653319 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-config\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653346 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-server-conf\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653374 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-erlang-cookie\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653388 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-rundir\") pod \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\" (UID: \"586d7a1d-7b2a-45ac-aacb-b77e95bf3d91\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653402 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-tls\") pod \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\" (UID: \"fc3f4796-66b1-452b-afca-5e62cbf2a53b\") " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.653537 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.655114 4931 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.655138 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.655151 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.655166 4931 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/348ffd7a-9b7f-40aa-ada9-145a3a783d09-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.655190 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-595b-account-create-update-jk6fx"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.655924 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-scripts" (OuterVolumeSpecName: "scripts") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.662689 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.662741 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-config" (OuterVolumeSpecName: "config") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.662975 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.663219 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.665021 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.666387 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.666481 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-kube-api-access-d6xv5" (OuterVolumeSpecName: "kube-api-access-d6xv5") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "kube-api-access-d6xv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.668538 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fc3f4796-66b1-452b-afca-5e62cbf2a53b-pod-info" (OuterVolumeSpecName: "pod-info") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.672582 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-kube-api-access-qb59q" (OuterVolumeSpecName: "kube-api-access-qb59q") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "kube-api-access-qb59q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.678089 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6xxt5"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.678799 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data" (OuterVolumeSpecName: "config-data") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.679568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc3f4796-66b1-452b-afca-5e62cbf2a53b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.681867 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.683579 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6xxt5"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.707385 4931 scope.go:117] "RemoveContainer" containerID="8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.711820 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-server-conf" (OuterVolumeSpecName: "server-conf") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.727837 4931 scope.go:117] "RemoveContainer" containerID="2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257" Jan 30 05:29:50 crc kubenswrapper[4931]: E0130 05:29:50.728138 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257\": container with ID starting with 2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257 not found: ID does not exist" containerID="2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.728201 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257"} err="failed to get container status \"2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257\": rpc error: code = NotFound desc = could not find container \"2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257\": container with ID starting with 2be8fb84347f40a619544d41d9e655f731841a9706fc18fb88a00804b3fe0257 not found: ID does not exist" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.728224 4931 scope.go:117] "RemoveContainer" containerID="8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07" Jan 30 05:29:50 crc kubenswrapper[4931]: E0130 05:29:50.728492 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07\": container with ID starting with 8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07 not found: ID does not exist" 
containerID="8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.728533 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07"} err="failed to get container status \"8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07\": rpc error: code = NotFound desc = could not find container \"8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07\": container with ID starting with 8c5e06a4cbe821677c0328bfab275af633aec8c57e858d92fdfeb861d12c3e07 not found: ID does not exist" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.728600 4931 scope.go:117] "RemoveContainer" containerID="7543927ad63a3b9e73b08c0c9fa7a83b683adacd56d1d27f7d72109cd07d12dd" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.731639 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.736650 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" (UID: "586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.749055 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fc3f4796-66b1-452b-afca-5e62cbf2a53b" (UID: "fc3f4796-66b1-452b-afca-5e62cbf2a53b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756619 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756643 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756652 4931 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fc3f4796-66b1-452b-afca-5e62cbf2a53b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756697 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756714 4931 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756725 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756733 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756765 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb59q\" (UniqueName: \"kubernetes.io/projected/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-kube-api-access-qb59q\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756774 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qrsv\" (UniqueName: \"kubernetes.io/projected/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d-kube-api-access-9qrsv\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756783 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756792 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756799 4931 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fc3f4796-66b1-452b-afca-5e62cbf2a53b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756807 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-confd\") on node \"crc\" 
DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756816 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6xv5\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-kube-api-access-d6xv5\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756842 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756850 4931 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fc3f4796-66b1-452b-afca-5e62cbf2a53b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756859 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756868 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.756876 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fc3f4796-66b1-452b-afca-5e62cbf2a53b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.788084 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.799129 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.806005 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.823372 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.829272 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 05:29:50 crc kubenswrapper[4931]: I0130 05:29:50.858024 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.225451 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-97bdbd495-2prdt" podUID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.155:5000/v3\": read tcp 10.217.0.2:50830->10.217.0.155:5000: read: connection reset by peer" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.432239 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f9b591-fea6-4010-99db-45eef2237cdc" path="/var/lib/kubelet/pods/30f9b591-fea6-4010-99db-45eef2237cdc/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.433172 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" 
path="/var/lib/kubelet/pods/348ffd7a-9b7f-40aa-ada9-145a3a783d09/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.433935 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" path="/var/lib/kubelet/pods/3bc265a8-34e2-4ec9-bdd5-69d75ea14bba/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.435105 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" path="/var/lib/kubelet/pods/406c25f3-c398-4ace-ba4b-1d9b48b289a2/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.435794 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" path="/var/lib/kubelet/pods/58928fea-709c-44d8-bd12-23937da8e2c4/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.436340 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" path="/var/lib/kubelet/pods/623f3c8f-d741-4ba4-baca-905a13102f38/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.437586 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" path="/var/lib/kubelet/pods/e4e6d6a8-599b-4ab9-b1f7-cf521e455d74/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.438034 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0ad84e9-a4cc-40a3-850c-f7757aad5b5d" path="/var/lib/kubelet/pods/f0ad84e9-a4cc-40a3-850c-f7757aad5b5d/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.438618 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" path="/var/lib/kubelet/pods/fc3f4796-66b1-452b-afca-5e62cbf2a53b/volumes" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.501118 4931 generic.go:334] "Generic (PLEG): container finished" podID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" containerID="2031f531f783ff9fda1aa19098c42b8b6619a54760d8a1056a1788a1c38b669b" exitCode=0 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.501543 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bdbd495-2prdt" event={"ID":"2d6e5156-6e75-4dff-a322-b3d43e596c7e","Type":"ContainerDied","Data":"2031f531f783ff9fda1aa19098c42b8b6619a54760d8a1056a1788a1c38b669b"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.504976 4931 generic.go:334] "Generic (PLEG): container finished" podID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerID="62da5f526098b3b9f5437a81119156f87878963a1c44c026236d9b63e20bbac5" exitCode=0 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.505013 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerDied","Data":"62da5f526098b3b9f5437a81119156f87878963a1c44c026236d9b63e20bbac5"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.505028 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cacfcbd5-8c12-4fc5-88ce-516fda23464d","Type":"ContainerDied","Data":"99a1153c1cd92ab2a34d0651a54dd16cc1116a03a4d5c96b1f4e7e5abbde1e2d"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.505038 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a1153c1cd92ab2a34d0651a54dd16cc1116a03a4d5c96b1f4e7e5abbde1e2d" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.506409 4931 generic.go:334] "Generic (PLEG): 
container finished" podID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerID="2c58ff417f0ff91cddebc47f633febd5ac50ac3ddd97dcc2fce28574c94ac8a6" exitCode=0 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.506468 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c996f77-c9rqm" event={"ID":"7729e2d8-6c8c-4759-9e5d-535ad1586f47","Type":"ContainerDied","Data":"2c58ff417f0ff91cddebc47f633febd5ac50ac3ddd97dcc2fce28574c94ac8a6"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.508911 4931 generic.go:334] "Generic (PLEG): container finished" podID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerID="1bd0c14353cbfd196f658cae7f7167624a1cc818a0ca23ec5151f1c871a22e65" exitCode=0 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.508950 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" event={"ID":"728a2e60-915e-4447-9465-aa64f7f5c7bb","Type":"ContainerDied","Data":"1bd0c14353cbfd196f658cae7f7167624a1cc818a0ca23ec5151f1c871a22e65"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.510471 4931 generic.go:334] "Generic (PLEG): container finished" podID="081e3873-ea99-4486-925f-784a98e49405" containerID="1bb7e19530d33f5a4cf134ca5c6644743c868cc750cac0c1bb313f0f47240dd8" exitCode=0 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.510528 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"081e3873-ea99-4486-925f-784a98e49405","Type":"ContainerDied","Data":"1bb7e19530d33f5a4cf134ca5c6644743c868cc750cac0c1bb313f0f47240dd8"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.512374 4931 generic.go:334] "Generic (PLEG): container finished" podID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" containerID="cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52" exitCode=0 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.512482 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.514187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1acfa9c2-a802-404e-976b-93d9f99e1fbb","Type":"ContainerDied","Data":"cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52"} Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.545519 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.553833 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.559106 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 05:29:51 crc kubenswrapper[4931]: E0130 05:29:51.572889 4931 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 30 05:29:51 crc kubenswrapper[4931]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T05:29:44Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 05:29:51 crc kubenswrapper[4931]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 30 05:29:51 crc kubenswrapper[4931]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-ggjtl" message=< Jan 30 05:29:51 crc kubenswrapper[4931]: Exiting ovn-controller (1) [FAILED] Jan 30 05:29:51 crc kubenswrapper[4931]: Killing ovn-controller (1) [ OK ] Jan 30 05:29:51 crc kubenswrapper[4931]: 2026-01-30T05:29:44Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 05:29:51 crc kubenswrapper[4931]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 30 05:29:51 crc kubenswrapper[4931]: > Jan 30 05:29:51 crc kubenswrapper[4931]: E0130 05:29:51.572937 4931 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 30 05:29:51 crc kubenswrapper[4931]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-30T05:29:44Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 30 05:29:51 crc kubenswrapper[4931]: /etc/init.d/functions: line 589: 393 Alarm clock "$@" Jan 30 05:29:51 crc kubenswrapper[4931]: > pod="openstack/ovn-controller-ggjtl" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" containerID="cri-o://324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.572998 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ggjtl" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" containerID="cri-o://324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9" gracePeriod=22 Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.677741 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-run-httpd\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.677841 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4df4\" (UniqueName: \"kubernetes.io/projected/cacfcbd5-8c12-4fc5-88ce-516fda23464d-kube-api-access-q4df4\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.677899 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-combined-ca-bundle\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.677924 4931 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-scripts\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.677943 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-ceilometer-tls-certs\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.678011 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-config-data\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.678039 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-sg-core-conf-yaml\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.678071 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-log-httpd\") pod \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\" (UID: \"cacfcbd5-8c12-4fc5-88ce-516fda23464d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.678797 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.679049 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.684962 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-scripts" (OuterVolumeSpecName: "scripts") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.685017 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacfcbd5-8c12-4fc5-88ce-516fda23464d-kube-api-access-q4df4" (OuterVolumeSpecName: "kube-api-access-q4df4") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "kube-api-access-q4df4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.728968 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.749578 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.751653 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.769518 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.779905 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781275 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4df4\" (UniqueName: \"kubernetes.io/projected/cacfcbd5-8c12-4fc5-88ce-516fda23464d-kube-api-access-q4df4\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781301 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781310 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781320 4931 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781328 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781336 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.781344 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/cacfcbd5-8c12-4fc5-88ce-516fda23464d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.786641 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.790645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-config-data" (OuterVolumeSpecName: "config-data") pod "cacfcbd5-8c12-4fc5-88ce-516fda23464d" (UID: "cacfcbd5-8c12-4fc5-88ce-516fda23464d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.798133 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bdbd495-2prdt" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.830502 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.882999 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data-custom\") pod \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883096 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r29mt\" (UniqueName: \"kubernetes.io/projected/728a2e60-915e-4447-9465-aa64f7f5c7bb-kube-api-access-r29mt\") pod \"728a2e60-915e-4447-9465-aa64f7f5c7bb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883122 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data\") pod \"728a2e60-915e-4447-9465-aa64f7f5c7bb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883154 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data-custom\") pod \"728a2e60-915e-4447-9465-aa64f7f5c7bb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883214 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-combined-ca-bundle\") pod \"728a2e60-915e-4447-9465-aa64f7f5c7bb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883291 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7729e2d8-6c8c-4759-9e5d-535ad1586f47-logs\") pod \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883332 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpv7q\" (UniqueName: \"kubernetes.io/projected/7729e2d8-6c8c-4759-9e5d-535ad1586f47-kube-api-access-vpv7q\") pod \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\" (UID: 
\"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883386 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-combined-ca-bundle\") pod \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883407 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data\") pod \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\" (UID: \"7729e2d8-6c8c-4759-9e5d-535ad1586f47\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883457 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2e60-915e-4447-9465-aa64f7f5c7bb-logs\") pod \"728a2e60-915e-4447-9465-aa64f7f5c7bb\" (UID: \"728a2e60-915e-4447-9465-aa64f7f5c7bb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883838 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacfcbd5-8c12-4fc5-88ce-516fda23464d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.883851 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7729e2d8-6c8c-4759-9e5d-535ad1586f47-logs" (OuterVolumeSpecName: "logs") pod "7729e2d8-6c8c-4759-9e5d-535ad1586f47" (UID: "7729e2d8-6c8c-4759-9e5d-535ad1586f47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.884337 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728a2e60-915e-4447-9465-aa64f7f5c7bb-logs" (OuterVolumeSpecName: "logs") pod "728a2e60-915e-4447-9465-aa64f7f5c7bb" (UID: "728a2e60-915e-4447-9465-aa64f7f5c7bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.887386 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728a2e60-915e-4447-9465-aa64f7f5c7bb-kube-api-access-r29mt" (OuterVolumeSpecName: "kube-api-access-r29mt") pod "728a2e60-915e-4447-9465-aa64f7f5c7bb" (UID: "728a2e60-915e-4447-9465-aa64f7f5c7bb"). InnerVolumeSpecName "kube-api-access-r29mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.888338 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7729e2d8-6c8c-4759-9e5d-535ad1586f47" (UID: "7729e2d8-6c8c-4759-9e5d-535ad1586f47"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.893596 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7729e2d8-6c8c-4759-9e5d-535ad1586f47-kube-api-access-vpv7q" (OuterVolumeSpecName: "kube-api-access-vpv7q") pod "7729e2d8-6c8c-4759-9e5d-535ad1586f47" (UID: "7729e2d8-6c8c-4759-9e5d-535ad1586f47"). InnerVolumeSpecName "kube-api-access-vpv7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.895506 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "728a2e60-915e-4447-9465-aa64f7f5c7bb" (UID: "728a2e60-915e-4447-9465-aa64f7f5c7bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.907340 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7729e2d8-6c8c-4759-9e5d-535ad1586f47" (UID: "7729e2d8-6c8c-4759-9e5d-535ad1586f47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.924595 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "728a2e60-915e-4447-9465-aa64f7f5c7bb" (UID: "728a2e60-915e-4447-9465-aa64f7f5c7bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.931673 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data" (OuterVolumeSpecName: "config-data") pod "7729e2d8-6c8c-4759-9e5d-535ad1586f47" (UID: "7729e2d8-6c8c-4759-9e5d-535ad1586f47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.933610 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ggjtl_8a337463-8b7e-496b-9a01-fc491120c21d/ovn-controller/0.log" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.933697 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ggjtl" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.942999 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data" (OuterVolumeSpecName: "config-data") pod "728a2e60-915e-4447-9465-aa64f7f5c7bb" (UID: "728a2e60-915e-4447-9465-aa64f7f5c7bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985325 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985538 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-internal-tls-certs\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985667 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrm76\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-kube-api-access-rrm76\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985741 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-config-data\") pod \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985820 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-server-conf\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985894 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/081e3873-ea99-4486-925f-784a98e49405-erlang-cookie-secret\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.985969 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/081e3873-ea99-4486-925f-784a98e49405-pod-info\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986054 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-ovn-controller-tls-certs\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986148 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-config-data\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-plugins-conf\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: 
\"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986269 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx75c\" (UniqueName: \"kubernetes.io/projected/2d6e5156-6e75-4dff-a322-b3d43e596c7e-kube-api-access-hx75c\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986328 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-scripts\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986392 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-fernet-keys\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986481 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-public-tls-certs\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986556 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986632 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-erlang-cookie\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986702 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-tls\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986770 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5r9p\" (UniqueName: \"kubernetes.io/projected/1acfa9c2-a802-404e-976b-93d9f99e1fbb-kube-api-access-z5r9p\") pod \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986834 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a337463-8b7e-496b-9a01-fc491120c21d-scripts\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986893 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-combined-ca-bundle\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: 
\"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.986982 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-combined-ca-bundle\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987058 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7snh\" (UniqueName: \"kubernetes.io/projected/8a337463-8b7e-496b-9a01-fc491120c21d-kube-api-access-l7snh\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987145 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-credential-keys\") pod \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\" (UID: \"2d6e5156-6e75-4dff-a322-b3d43e596c7e\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987608 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-plugins\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987709 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-confd\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987775 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-combined-ca-bundle\") pod \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\" (UID: \"1acfa9c2-a802-404e-976b-93d9f99e1fbb\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987839 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-log-ovn\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.987913 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"081e3873-ea99-4486-925f-784a98e49405\" (UID: \"081e3873-ea99-4486-925f-784a98e49405\") " Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988302 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r29mt\" (UniqueName: \"kubernetes.io/projected/728a2e60-915e-4447-9465-aa64f7f5c7bb-kube-api-access-r29mt\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988365 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988459 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988544 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/728a2e60-915e-4447-9465-aa64f7f5c7bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988612 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7729e2d8-6c8c-4759-9e5d-535ad1586f47-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988742 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpv7q\" (UniqueName: \"kubernetes.io/projected/7729e2d8-6c8c-4759-9e5d-535ad1586f47-kube-api-access-vpv7q\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988815 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988867 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988921 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/728a2e60-915e-4447-9465-aa64f7f5c7bb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.988973 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7729e2d8-6c8c-4759-9e5d-535ad1586f47-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.989779 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.990967 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.993142 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6e5156-6e75-4dff-a322-b3d43e596c7e-kube-api-access-hx75c" (OuterVolumeSpecName: "kube-api-access-hx75c") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "kube-api-access-hx75c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.993267 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.994623 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run" (OuterVolumeSpecName: "var-run") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.994908 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/081e3873-ea99-4486-925f-784a98e49405-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.995903 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.996053 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a337463-8b7e-496b-9a01-fc491120c21d-scripts" (OuterVolumeSpecName: "scripts") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.998248 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.998275 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a337463-8b7e-496b-9a01-fc491120c21d-kube-api-access-l7snh" (OuterVolumeSpecName: "kube-api-access-l7snh") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). InnerVolumeSpecName "kube-api-access-l7snh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.998525 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-kube-api-access-rrm76" (OuterVolumeSpecName: "kube-api-access-rrm76") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "kube-api-access-rrm76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.998720 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.999051 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-scripts" (OuterVolumeSpecName: "scripts") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:51 crc kubenswrapper[4931]: I0130 05:29:51.999128 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.000379 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1acfa9c2-a802-404e-976b-93d9f99e1fbb-kube-api-access-z5r9p" (OuterVolumeSpecName: "kube-api-access-z5r9p") pod "1acfa9c2-a802-404e-976b-93d9f99e1fbb" (UID: "1acfa9c2-a802-404e-976b-93d9f99e1fbb"). InnerVolumeSpecName "kube-api-access-z5r9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.004921 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.008672 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/081e3873-ea99-4486-925f-784a98e49405-pod-info" (OuterVolumeSpecName: "pod-info") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.024243 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data" (OuterVolumeSpecName: "config-data") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.048570 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-config-data" (OuterVolumeSpecName: "config-data") pod "1acfa9c2-a802-404e-976b-93d9f99e1fbb" (UID: "1acfa9c2-a802-404e-976b-93d9f99e1fbb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.054722 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.057078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1acfa9c2-a802-404e-976b-93d9f99e1fbb" (UID: "1acfa9c2-a802-404e-976b-93d9f99e1fbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.060480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-server-conf" (OuterVolumeSpecName: "server-conf") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.060872 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-config-data" (OuterVolumeSpecName: "config-data") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.062655 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.074091 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.077590 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d6e5156-6e75-4dff-a322-b3d43e596c7e" (UID: "2d6e5156-6e75-4dff-a322-b3d43e596c7e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.083981 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). 
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089288 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run-ovn\") pod \"8a337463-8b7e-496b-9a01-fc491120c21d\" (UID: \"8a337463-8b7e-496b-9a01-fc491120c21d\") "
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089387 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8a337463-8b7e-496b-9a01-fc491120c21d" (UID: "8a337463-8b7e-496b-9a01-fc491120c21d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089587 4931 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089611 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089625 4931 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089637 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx75c\" (UniqueName: \"kubernetes.io/projected/2d6e5156-6e75-4dff-a322-b3d43e596c7e-kube-api-access-hx75c\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089649 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089659 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089669 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089681 4931 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089694 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089703 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
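
Annotation: the interleaved reconciler_common.go:159 ("operationExecutor.UnmountVolume started"), operation_generator.go:803 ("UnmountVolume.TearDown succeeded") and reconciler_common.go:293 ("Volume detached") lines trace passes of the volume manager's reconciler: it diffs the volumes actually mounted on the node against the set still desired by pods, launches an unmount for each leftover, and records the detachment. A compressed Go sketch of that desired-versus-actual pattern; the types here are simplified stand-ins, not the kubelet's real interfaces:

```go
package main

import "fmt"

// volume identifies one per-pod mount, keyed by pod UID and volume name.
type volume struct{ pod, name string }

// reconcile unmounts every volume that is actually mounted but no longer
// desired, mirroring the started -> TearDown succeeded -> detached
// progression visible in the log above.
func reconcile(actual, desired map[volume]bool) {
	for v := range actual {
		if desired[v] {
			continue // still wanted by a running pod
		}
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
		// ... TearDown would remove the per-pod mount point here ...
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v.name)
		delete(actual, v) // update the actual-state-of-world
		fmt.Printf("Volume detached for volume %q\n", v.name)
	}
}

func main() {
	actual := map[volume]bool{
		{pod: "cacfcbd5", name: "sg-core-conf-yaml"}: true,
	}
	reconcile(actual, map[volume]bool{}) // pod deleted: nothing is desired
}
```
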
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089713 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a337463-8b7e-496b-9a01-fc491120c21d-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089723 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5r9p\" (UniqueName: \"kubernetes.io/projected/1acfa9c2-a802-404e-976b-93d9f99e1fbb-kube-api-access-z5r9p\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089734 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089745 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a337463-8b7e-496b-9a01-fc491120c21d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089755 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7snh\" (UniqueName: \"kubernetes.io/projected/8a337463-8b7e-496b-9a01-fc491120c21d-kube-api-access-l7snh\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089766 4931 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089776 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089786 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089796 4931 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089828 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089840 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089850 4931 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a337463-8b7e-496b-9a01-fc491120c21d-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089860 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6e5156-6e75-4dff-a322-b3d43e596c7e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089871 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrm76\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-kube-api-access-rrm76\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089882 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1acfa9c2-a802-404e-976b-93d9f99e1fbb-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089893 4931 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/081e3873-ea99-4486-925f-784a98e49405-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089903 4931 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/081e3873-ea99-4486-925f-784a98e49405-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.089913 4931 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/081e3873-ea99-4486-925f-784a98e49405-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.103566 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.107340 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "081e3873-ea99-4486-925f-784a98e49405" (UID: "081e3873-ea99-4486-925f-784a98e49405"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
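
Annotation: local-storage06-crc is the only volume in this section that also receives an UnmountDevice (reconciler_common.go:286 and operation_generator.go:917). Device-backed volumes are released in two phases: first each pod's bind mount is torn down, then the node-global device mount is removed, and the second phase may only run once no pod mount remains. A Go sketch of that ordering, with simplified stand-in types:

```go
package main

import "fmt"

// deviceVolume models a device-backed volume with a node-global mount
// plus per-pod bind mounts (a stand-in for the real kubelet structures).
type deviceVolume struct {
	name     string
	podUsers map[string]bool // pods holding an active per-pod mount
}

// tearDownPodMount removes one pod's mount of the volume (phase 1).
func (v *deviceVolume) tearDownPodMount(pod string) {
	delete(v.podUsers, pod)
	fmt.Printf("UnmountVolume.TearDown succeeded for volume %q pod %q\n", v.name, pod)
}

// unmountDevice removes the node-global mount (phase 2) and is only
// legal once no per-pod mount remains -- the ordering seen in the log.
func (v *deviceVolume) unmountDevice() error {
	if len(v.podUsers) > 0 {
		return fmt.Errorf("device %q still in use by %d pod(s)", v.name, len(v.podUsers))
	}
	fmt.Printf("UnmountDevice succeeded for volume %q\n", v.name)
	return nil
}

func main() {
	v := &deviceVolume{name: "local-storage06-crc", podUsers: map[string]bool{"081e3873": true}}
	v.tearDownPodMount("081e3873") // phase 1: per-pod mount
	if err := v.unmountDevice(); err != nil { // phase 2: global device mount
		fmt.Println(err)
	}
}
```
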
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.191643 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/081e3873-ea99-4486-925f-784a98e49405-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.191687 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.525628 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ggjtl_8a337463-8b7e-496b-9a01-fc491120c21d/ovn-controller/0.log" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.526817 4931 generic.go:334] "Generic (PLEG): container finished" podID="8a337463-8b7e-496b-9a01-fc491120c21d" containerID="324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9" exitCode=139 Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.526925 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl" event={"ID":"8a337463-8b7e-496b-9a01-fc491120c21d","Type":"ContainerDied","Data":"324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9"} Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.527513 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ggjtl" event={"ID":"8a337463-8b7e-496b-9a01-fc491120c21d","Type":"ContainerDied","Data":"c2331a0e3efc476073fa6f72907e46cdc0fd3358dd0c363648234586881ae09d"} Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.526955 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ggjtl" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.527597 4931 scope.go:117] "RemoveContainer" containerID="324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.533115 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7c996f77-c9rqm" event={"ID":"7729e2d8-6c8c-4759-9e5d-535ad1586f47","Type":"ContainerDied","Data":"ac16bb78f3ca5ff67b0b11f3773806ecd75b6cd0b938e4013f99b8b4e7b2e044"} Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.533246 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7c996f77-c9rqm" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.550516 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" event={"ID":"728a2e60-915e-4447-9465-aa64f7f5c7bb","Type":"ContainerDied","Data":"cd9a53b66398f13fcc5edf6801d39072217390bf6fb5b5264a9e5d24f429383b"} Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.550561 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f5d456c6b-66jxb" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.557449 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bdbd495-2prdt" event={"ID":"2d6e5156-6e75-4dff-a322-b3d43e596c7e","Type":"ContainerDied","Data":"c2c40320b6d71850a7db7d062b86807a450e4758cd147671abdfe8fd00c2df62"} Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.557470 4931 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.564907 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"081e3873-ea99-4486-925f-784a98e49405","Type":"ContainerDied","Data":"9ce481797a1f7988304010979cd564d60b819812a50932395d5f66e51b07187f"}
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.564947 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.571171 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.571163 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1acfa9c2-a802-404e-976b-93d9f99e1fbb","Type":"ContainerDied","Data":"24154bd6bbe2da670ea864204ee97206379a1b7b92792be6f14d33757f908143"}
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.571462 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.595841 4931 scope.go:117] "RemoveContainer" containerID="324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9"
Jan 30 05:29:52 crc kubenswrapper[4931]: E0130 05:29:52.598728 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9\": container with ID starting with 324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9 not found: ID does not exist" containerID="324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.598785 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9"} err="failed to get container status \"324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9\": rpc error: code = NotFound desc = could not find container \"324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9\": container with ID starting with 324b877332d5afddf300c75ec0403e36ade2235644b01dec590b1d759e7f75a9 not found: ID does not exist"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.598815 4931 scope.go:117] "RemoveContainer" containerID="2c58ff417f0ff91cddebc47f633febd5ac50ac3ddd97dcc2fce28574c94ac8a6"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.605942 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ggjtl"]
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.613272 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ggjtl"]
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.631513 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7c996f77-c9rqm"]
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.645260 4931 scope.go:117] "RemoveContainer" containerID="4ec5d987a513f3f04cf30f8d242bd5ee734a2387c5a279b070c18b72f4a56519"
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.645589 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7c996f77-c9rqm"]
Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.660574 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"]
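
Annotation: the E0130 "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" pair above is benign. Container 324b8773... had already been removed by the time the second RemoveContainer ran, and deleting an object that is already absent leaves the desired state satisfied. The idempotent-delete pattern, sketched against a toy in-memory runtime (not the real CRI client):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("container not found")

// runtime is a toy stand-in for the container runtime's container table.
type runtime struct{ containers map[string]bool }

func (r *runtime) remove(id string) error {
	if !r.containers[id] {
		return fmt.Errorf("removing %s: %w", id, errNotFound)
	}
	delete(r.containers, id)
	return nil
}

// removeContainer treats NotFound as success, so retries and races
// (a second RemoveContainer after the first already won) stay harmless.
func removeContainer(r *runtime, id string) error {
	if err := r.remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			return nil // already gone: the desired state holds
		}
		return err
	}
	return nil
}

func main() {
	r := &runtime{containers: map[string]bool{"324b8773": true}}
	fmt.Println(removeContainer(r, "324b8773")) // nil: deleted
	fmt.Println(removeContainer(r, "324b8773")) // nil: NotFound swallowed
}
```
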
source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.668928 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5f5d456c6b-66jxb"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.674576 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.675248 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.680389 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.681527 4931 scope.go:117] "RemoveContainer" containerID="1bd0c14353cbfd196f658cae7f7167624a1cc818a0ca23ec5151f1c871a22e65" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.685744 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.692522 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-97bdbd495-2prdt"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.696704 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-97bdbd495-2prdt"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.701038 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.705179 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.709058 4931 scope.go:117] "RemoveContainer" containerID="f0f483180c30bc672edc2e00c840d52567eb9b0c61f8c285d3a7c2a185f38020" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.725158 4931 scope.go:117] "RemoveContainer" containerID="2031f531f783ff9fda1aa19098c42b8b6619a54760d8a1056a1788a1c38b669b" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.750086 4931 scope.go:117] "RemoveContainer" containerID="1bb7e19530d33f5a4cf134ca5c6644743c868cc750cac0c1bb313f0f47240dd8" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.770896 4931 scope.go:117] "RemoveContainer" containerID="4db4fc560f1e0be65146b56bdc4340b3ff1c5a4fe7510a353795090f99291213" Jan 30 05:29:52 crc kubenswrapper[4931]: I0130 05:29:52.800741 4931 scope.go:117] "RemoveContainer" containerID="cc83b1f403d157f95969723a88f60d5874181dc078497768c1cec64f4187dd52" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.439153 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081e3873-ea99-4486-925f-784a98e49405" path="/var/lib/kubelet/pods/081e3873-ea99-4486-925f-784a98e49405/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.439727 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" path="/var/lib/kubelet/pods/1acfa9c2-a802-404e-976b-93d9f99e1fbb/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.440170 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" path="/var/lib/kubelet/pods/2d6e5156-6e75-4dff-a322-b3d43e596c7e/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.441144 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" 
path="/var/lib/kubelet/pods/586d7a1d-7b2a-45ac-aacb-b77e95bf3d91/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.441761 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" path="/var/lib/kubelet/pods/728a2e60-915e-4447-9465-aa64f7f5c7bb/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.442318 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" path="/var/lib/kubelet/pods/7729e2d8-6c8c-4759-9e5d-535ad1586f47/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.443458 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" path="/var/lib/kubelet/pods/8a337463-8b7e-496b-9a01-fc491120c21d/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.444127 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" path="/var/lib/kubelet/pods/cacfcbd5-8c12-4fc5-88ce-516fda23464d/volumes" Jan 30 05:29:53 crc kubenswrapper[4931]: I0130 05:29:53.567129 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-75d9f6f6ff-kmswn" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.165:9696/\": dial tcp 10.217.0.165:9696: connect: connection refused" Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.341462 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.342359 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.342835 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.342942 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.344185 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.345755 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.347624 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:54 crc kubenswrapper[4931]: E0130 05:29:54.347700 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:29:55 crc kubenswrapper[4931]: I0130 05:29:55.326024 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: i/o timeout" Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.363228 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.363317 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.363385 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.364476 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"083c2726f719c1b6c228fc0d209a309a403985263c1ced3ea0982529442fd973"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.364604 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://083c2726f719c1b6c228fc0d209a309a403985263c1ced3ea0982529442fd973" gracePeriod=600 Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.637987 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="083c2726f719c1b6c228fc0d209a309a403985263c1ced3ea0982529442fd973" exitCode=0 Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.638044 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"083c2726f719c1b6c228fc0d209a309a403985263c1ced3ea0982529442fd973"} Jan 30 05:29:57 crc kubenswrapper[4931]: I0130 05:29:57.638289 4931 scope.go:117] "RemoveContainer" containerID="a45fd242a77041b5be27fe445a509a614e0332f92cf4e23ef129ae6c3582244f" Jan 30 05:29:58 crc kubenswrapper[4931]: I0130 05:29:58.676333 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0"} Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.341204 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.341754 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.342095 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.342158 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.343948 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.345707 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.347192 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:29:59 crc kubenswrapper[4931]: E0130 05:29:59.347299 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177249 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"] Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177717 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" containerName="nova-scheduler-scheduler" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177738 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" containerName="nova-scheduler-scheduler" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177758 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-metadata" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177770 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-metadata" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177795 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" containerName="mariadb-account-create-update" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177807 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" containerName="mariadb-account-create-update" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177824 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="openstack-network-exporter" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177836 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="openstack-network-exporter" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177852 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="sg-core" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177864 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="sg-core" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177887 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="rabbitmq" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177899 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="rabbitmq" Jan 30 
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177934 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177949 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" containerName="memcached"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177961 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" containerName="memcached"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.177978 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-httpd"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.177990 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-httpd"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178002 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="proxy-httpd"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178013 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="proxy-httpd"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178032 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178045 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178069 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerName="galera"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178080 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerName="galera"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178094 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178105 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178123 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178135 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178157 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178169 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener"
containerName="barbican-keystone-listener" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178185 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178196 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-api" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178220 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="ovn-northd" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178232 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="ovn-northd" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178244 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178256 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-log" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178271 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="setup-container" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178283 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="setup-container" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178304 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178317 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-log" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178338 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178350 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-api" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178371 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178383 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178401 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="setup-container" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178413 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="setup-container" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178454 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-notification-agent" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178467 4931 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-notification-agent" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178491 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-httpd" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178504 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-httpd" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178527 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178539 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178555 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" containerName="keystone-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178568 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" containerName="keystone-api" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178583 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" containerName="mariadb-account-create-update" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178596 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" containerName="mariadb-account-create-update" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178618 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerName="mysql-bootstrap" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178630 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerName="mysql-bootstrap" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178642 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178653 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-log" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178668 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f9b591-fea6-4010-99db-45eef2237cdc" containerName="kube-state-metrics" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178680 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f9b591-fea6-4010-99db-45eef2237cdc" containerName="kube-state-metrics" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178700 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="rabbitmq" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178714 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="rabbitmq" Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178729 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178740 4931 
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178759 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178771 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker"
Jan 30 05:30:00 crc kubenswrapper[4931]: E0130 05:30:00.178787 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-central-agent"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.178799 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-central-agent"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179036 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1acfa9c2-a802-404e-976b-93d9f99e1fbb" containerName="nova-scheduler-scheduler"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179058 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" containerName="mariadb-account-create-update"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179077 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="ovn-northd"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179099 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179122 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179143 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="348ffd7a-9b7f-40aa-ada9-145a3a783d09" containerName="galera"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179162 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7729e2d8-6c8c-4759-9e5d-535ad1586f47" containerName="barbican-worker-log"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179181 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-api"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179203 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="586d7a1d-7b2a-45ac-aacb-b77e95bf3d91" containerName="openstack-network-exporter"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179227 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-notification-agent"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179244 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f9b591-fea6-4010-99db-45eef2237cdc" containerName="kube-state-metrics"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179256 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a337463-8b7e-496b-9a01-fc491120c21d" containerName="ovn-controller"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179275 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener"
podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179297 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="081e3873-ea99-4486-925f-784a98e49405" containerName="rabbitmq" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179319 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe4f743-9a60-428f-8b58-14ba160d9fd7" containerName="placement-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179340 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0ddaec-4521-4898-8649-262b52f24acb" containerName="glance-httpd" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179358 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="sg-core" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179377 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="728a2e60-915e-4447-9465-aa64f7f5c7bb" containerName="barbican-keystone-listener-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179391 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc265a8-34e2-4ec9-bdd5-69d75ea14bba" containerName="memcached" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179411 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6e5156-6e75-4dff-a322-b3d43e596c7e" containerName="keystone-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179452 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-metadata" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179473 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179493 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="58928fea-709c-44d8-bd12-23937da8e2c4" containerName="barbican-api-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179509 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3415cfc4-a71a-4110-bf82-295181bb386f" containerName="glance-httpd" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179524 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-api" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179538 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3f4796-66b1-452b-afca-5e62cbf2a53b" containerName="rabbitmq" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179586 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="ceilometer-central-agent" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179606 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacfcbd5-8c12-4fc5-88ce-516fda23464d" containerName="proxy-httpd" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179626 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="406c25f3-c398-4ace-ba4b-1d9b48b289a2" containerName="nova-api-log" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.179652 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e6d6a8-599b-4ab9-b1f7-cf521e455d74" containerName="nova-metadata-log" Jan 30 05:30:00 crc 
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.180546 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.185495 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.193787 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.204081 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"]
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.243374 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkswv\" (UniqueName: \"kubernetes.io/projected/09a19500-eb44-455f-a8b7-7ee5375b87ef-kube-api-access-dkswv\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.243647 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09a19500-eb44-455f-a8b7-7ee5375b87ef-config-volume\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.243791 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09a19500-eb44-455f-a8b7-7ee5375b87ef-secret-volume\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.344761 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09a19500-eb44-455f-a8b7-7ee5375b87ef-secret-volume\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.344916 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkswv\" (UniqueName: \"kubernetes.io/projected/09a19500-eb44-455f-a8b7-7ee5375b87ef-kube-api-access-dkswv\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.344946 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09a19500-eb44-455f-a8b7-7ee5375b87ef-config-volume\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"
pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.345929 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09a19500-eb44-455f-a8b7-7ee5375b87ef-config-volume\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.351946 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09a19500-eb44-455f-a8b7-7ee5375b87ef-secret-volume\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.368983 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkswv\" (UniqueName: \"kubernetes.io/projected/09a19500-eb44-455f-a8b7-7ee5375b87ef-kube-api-access-dkswv\") pod \"collect-profiles-29495850-8t6xv\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.518284 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:00 crc kubenswrapper[4931]: I0130 05:30:00.821388 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"] Jan 30 05:30:00 crc kubenswrapper[4931]: W0130 05:30:00.831745 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09a19500_eb44_455f_a8b7_7ee5375b87ef.slice/crio-1b851dd5da777878c4407279b086c7ba0baadc7fa13f64016500cc321a47e3f9 WatchSource:0}: Error finding container 1b851dd5da777878c4407279b086c7ba0baadc7fa13f64016500cc321a47e3f9: Status 404 returned error can't find the container with id 1b851dd5da777878c4407279b086c7ba0baadc7fa13f64016500cc321a47e3f9 Jan 30 05:30:01 crc kubenswrapper[4931]: I0130 05:30:01.716844 4931 generic.go:334] "Generic (PLEG): container finished" podID="09a19500-eb44-455f-a8b7-7ee5375b87ef" containerID="f1cc6685442d84c78caf7ee74e69ba6f0a12fa18a641f9f2d8eb2d03f2ae6e04" exitCode=0 Jan 30 05:30:01 crc kubenswrapper[4931]: I0130 05:30:01.716957 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" event={"ID":"09a19500-eb44-455f-a8b7-7ee5375b87ef","Type":"ContainerDied","Data":"f1cc6685442d84c78caf7ee74e69ba6f0a12fa18a641f9f2d8eb2d03f2ae6e04"} Jan 30 05:30:01 crc kubenswrapper[4931]: I0130 05:30:01.717288 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" event={"ID":"09a19500-eb44-455f-a8b7-7ee5375b87ef","Type":"ContainerStarted","Data":"1b851dd5da777878c4407279b086c7ba0baadc7fa13f64016500cc321a47e3f9"} Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.175979 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.193972 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkswv\" (UniqueName: \"kubernetes.io/projected/09a19500-eb44-455f-a8b7-7ee5375b87ef-kube-api-access-dkswv\") pod \"09a19500-eb44-455f-a8b7-7ee5375b87ef\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.194062 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09a19500-eb44-455f-a8b7-7ee5375b87ef-secret-volume\") pod \"09a19500-eb44-455f-a8b7-7ee5375b87ef\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.194099 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09a19500-eb44-455f-a8b7-7ee5375b87ef-config-volume\") pod \"09a19500-eb44-455f-a8b7-7ee5375b87ef\" (UID: \"09a19500-eb44-455f-a8b7-7ee5375b87ef\") " Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.194953 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a19500-eb44-455f-a8b7-7ee5375b87ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "09a19500-eb44-455f-a8b7-7ee5375b87ef" (UID: "09a19500-eb44-455f-a8b7-7ee5375b87ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.210182 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a19500-eb44-455f-a8b7-7ee5375b87ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "09a19500-eb44-455f-a8b7-7ee5375b87ef" (UID: "09a19500-eb44-455f-a8b7-7ee5375b87ef"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.210874 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a19500-eb44-455f-a8b7-7ee5375b87ef-kube-api-access-dkswv" (OuterVolumeSpecName: "kube-api-access-dkswv") pod "09a19500-eb44-455f-a8b7-7ee5375b87ef" (UID: "09a19500-eb44-455f-a8b7-7ee5375b87ef"). InnerVolumeSpecName "kube-api-access-dkswv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.296128 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkswv\" (UniqueName: \"kubernetes.io/projected/09a19500-eb44-455f-a8b7-7ee5375b87ef-kube-api-access-dkswv\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.296194 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/09a19500-eb44-455f-a8b7-7ee5375b87ef-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.296214 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09a19500-eb44-455f-a8b7-7ee5375b87ef-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.744438 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.744472 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv" event={"ID":"09a19500-eb44-455f-a8b7-7ee5375b87ef","Type":"ContainerDied","Data":"1b851dd5da777878c4407279b086c7ba0baadc7fa13f64016500cc321a47e3f9"} Jan 30 05:30:03 crc kubenswrapper[4931]: I0130 05:30:03.744526 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b851dd5da777878c4407279b086c7ba0baadc7fa13f64016500cc321a47e3f9" Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.341806 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.342807 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.343397 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.343497 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.345001 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.347497 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.349739 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
Jan 30 05:30:04 crc kubenswrapper[4931]: E0130 05:30:04.349832 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd"
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.649392 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75d9f6f6ff-kmswn"
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.668980 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-internal-tls-certs\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.669126 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-httpd-config\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.669173 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-public-tls-certs\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.669250 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-combined-ca-bundle\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.669292 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c78ch\" (UniqueName: \"kubernetes.io/projected/e1f9790c-c395-4c72-b569-3140f703b56f-kube-api-access-c78ch\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.669339 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-config\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.669378 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-ovndb-tls-certs\") pod \"e1f9790c-c395-4c72-b569-3140f703b56f\" (UID: \"e1f9790c-c395-4c72-b569-3140f703b56f\") "
Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.678533 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
"e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.680480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f9790c-c395-4c72-b569-3140f703b56f-kube-api-access-c78ch" (OuterVolumeSpecName: "kube-api-access-c78ch") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "kube-api-access-c78ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.724401 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-config" (OuterVolumeSpecName: "config") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.737078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.739211 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.743269 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.771665 4931 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.771723 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.771743 4931 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.771808 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.771829 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c78ch\" (UniqueName: \"kubernetes.io/projected/e1f9790c-c395-4c72-b569-3140f703b56f-kube-api-access-c78ch\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.771851 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-config\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.772397 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e1f9790c-c395-4c72-b569-3140f703b56f" (UID: "e1f9790c-c395-4c72-b569-3140f703b56f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.777347 4931 generic.go:334] "Generic (PLEG): container finished" podID="e1f9790c-c395-4c72-b569-3140f703b56f" containerID="59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22" exitCode=0 Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.777454 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75d9f6f6ff-kmswn" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.777538 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d9f6f6ff-kmswn" event={"ID":"e1f9790c-c395-4c72-b569-3140f703b56f","Type":"ContainerDied","Data":"59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22"} Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.777644 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75d9f6f6ff-kmswn" event={"ID":"e1f9790c-c395-4c72-b569-3140f703b56f","Type":"ContainerDied","Data":"7ca85a404546ccf8741ecd606d0270826591549f7f7979a41ae78e99d2986a63"} Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.777709 4931 scope.go:117] "RemoveContainer" containerID="e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.813788 4931 scope.go:117] "RemoveContainer" containerID="59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.815469 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-75d9f6f6ff-kmswn"] Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.820633 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-75d9f6f6ff-kmswn"] Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.835345 4931 scope.go:117] "RemoveContainer" containerID="e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e" Jan 30 05:30:05 crc kubenswrapper[4931]: E0130 05:30:05.835813 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e\": container with ID starting with e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e not found: ID does not exist" containerID="e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.835881 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e"} err="failed to get container status \"e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e\": rpc error: code = NotFound desc = could not find container \"e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e\": container with ID starting with e66d09407f90c54520f8315b166cae42a909127ad176413e644d8b4d72518f5e not found: ID does not exist" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.835920 4931 scope.go:117] "RemoveContainer" containerID="59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22" Jan 30 05:30:05 crc kubenswrapper[4931]: E0130 05:30:05.836302 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22\": container with ID starting with 59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22 not found: ID does not exist" containerID="59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.836335 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22"} err="failed to get container status 
\"59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22\": rpc error: code = NotFound desc = could not find container \"59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22\": container with ID starting with 59911ed8b4d4f6e717a3a6a2f0ab4674f62b93e56bb563bd9ec43bf578791b22 not found: ID does not exist" Jan 30 05:30:05 crc kubenswrapper[4931]: I0130 05:30:05.873739 4931 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f9790c-c395-4c72-b569-3140f703b56f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:07 crc kubenswrapper[4931]: I0130 05:30:07.440389 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" path="/var/lib/kubelet/pods/e1f9790c-c395-4c72-b569-3140f703b56f/volumes" Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.343242 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.344107 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.344946 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.345015 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.346621 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.349403 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:09 crc 
Jan 30 05:30:09 crc kubenswrapper[4931]: E0130 05:30:09.352177 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd"
Jan 30 05:30:13 crc kubenswrapper[4931]: E0130 05:30:13.886790 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52577244_c181_4919_b5b0_040e229163db.slice/crio-conmon-7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52577244_c181_4919_b5b0_040e229163db.slice/crio-7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719.scope\": RecentStats: unable to find data in memory cache]"
Jan 30 05:30:13 crc kubenswrapper[4931]: I0130 05:30:13.895719 4931 generic.go:334] "Generic (PLEG): container finished" podID="52577244-c181-4919-b5b0-040e229163db" containerID="7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719" exitCode=137
Jan 30 05:30:13 crc kubenswrapper[4931]: I0130 05:30:13.895770 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719"}
Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.232449 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.313225 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") pod \"52577244-c181-4919-b5b0-040e229163db\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.313316 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"52577244-c181-4919-b5b0-040e229163db\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.313395 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-lock\") pod \"52577244-c181-4919-b5b0-040e229163db\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.313495 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-cache\") pod \"52577244-c181-4919-b5b0-040e229163db\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.313528 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52577244-c181-4919-b5b0-040e229163db-combined-ca-bundle\") pod \"52577244-c181-4919-b5b0-040e229163db\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.313586 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56w5l\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-kube-api-access-56w5l\") pod \"52577244-c181-4919-b5b0-040e229163db\" (UID: \"52577244-c181-4919-b5b0-040e229163db\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.314373 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-lock" (OuterVolumeSpecName: "lock") pod "52577244-c181-4919-b5b0-040e229163db" (UID: "52577244-c181-4919-b5b0-040e229163db"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.314355 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-cache" (OuterVolumeSpecName: "cache") pod "52577244-c181-4919-b5b0-040e229163db" (UID: "52577244-c181-4919-b5b0-040e229163db"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.314720 4931 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-lock\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.331476 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52577244-c181-4919-b5b0-040e229163db" (UID: "52577244-c181-4919-b5b0-040e229163db"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.331494 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "swift") pod "52577244-c181-4919-b5b0-040e229163db" (UID: "52577244-c181-4919-b5b0-040e229163db"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.331536 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-kube-api-access-56w5l" (OuterVolumeSpecName: "kube-api-access-56w5l") pod "52577244-c181-4919-b5b0-040e229163db" (UID: "52577244-c181-4919-b5b0-040e229163db"). InnerVolumeSpecName "kube-api-access-56w5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.341841 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.341985 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17 is running failed: container process not found" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.342442 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17 is running failed: container process not found" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.342532 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.342727 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17 is running failed: container process not found" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.342771 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.343094 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 30 05:30:14 crc kubenswrapper[4931]: E0130 05:30:14.343117 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-thxc2" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.415393 4931 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/52577244-c181-4919-b5b0-040e229163db-cache\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.415489 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56w5l\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-kube-api-access-56w5l\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.415505 4931 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52577244-c181-4919-b5b0-040e229163db-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.415587 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.434642 4931 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.517203 4931 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.552018 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-thxc2_5732e34e-6330-4a36-9082-dbb50eede9f2/ovs-vswitchd/0.log" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.554744 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618250 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-etc-ovs\") pod \"5732e34e-6330-4a36-9082-dbb50eede9f2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618323 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-log\") pod \"5732e34e-6330-4a36-9082-dbb50eede9f2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618336 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5732e34e-6330-4a36-9082-dbb50eede9f2" (UID: "5732e34e-6330-4a36-9082-dbb50eede9f2"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-lib\") pod \"5732e34e-6330-4a36-9082-dbb50eede9f2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618415 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-lib" (OuterVolumeSpecName: "var-lib") pod "5732e34e-6330-4a36-9082-dbb50eede9f2" (UID: "5732e34e-6330-4a36-9082-dbb50eede9f2"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618479 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-log" (OuterVolumeSpecName: "var-log") pod "5732e34e-6330-4a36-9082-dbb50eede9f2" (UID: "5732e34e-6330-4a36-9082-dbb50eede9f2"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618570 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5732e34e-6330-4a36-9082-dbb50eede9f2-scripts\") pod \"5732e34e-6330-4a36-9082-dbb50eede9f2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618628 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-run\") pod \"5732e34e-6330-4a36-9082-dbb50eede9f2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618660 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d259g\" (UniqueName: \"kubernetes.io/projected/5732e34e-6330-4a36-9082-dbb50eede9f2-kube-api-access-d259g\") pod \"5732e34e-6330-4a36-9082-dbb50eede9f2\" (UID: \"5732e34e-6330-4a36-9082-dbb50eede9f2\") " Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.618733 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-run" (OuterVolumeSpecName: "var-run") pod "5732e34e-6330-4a36-9082-dbb50eede9f2" (UID: "5732e34e-6330-4a36-9082-dbb50eede9f2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.619090 4931 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.619149 4931 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.619161 4931 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-lib\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.619202 4931 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5732e34e-6330-4a36-9082-dbb50eede9f2-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.620696 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5732e34e-6330-4a36-9082-dbb50eede9f2-scripts" (OuterVolumeSpecName: "scripts") pod "5732e34e-6330-4a36-9082-dbb50eede9f2" (UID: "5732e34e-6330-4a36-9082-dbb50eede9f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.632654 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5732e34e-6330-4a36-9082-dbb50eede9f2-kube-api-access-d259g" (OuterVolumeSpecName: "kube-api-access-d259g") pod "5732e34e-6330-4a36-9082-dbb50eede9f2" (UID: "5732e34e-6330-4a36-9082-dbb50eede9f2"). InnerVolumeSpecName "kube-api-access-d259g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.704167 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52577244-c181-4919-b5b0-040e229163db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52577244-c181-4919-b5b0-040e229163db" (UID: "52577244-c181-4919-b5b0-040e229163db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.721049 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5732e34e-6330-4a36-9082-dbb50eede9f2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.721091 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d259g\" (UniqueName: \"kubernetes.io/projected/5732e34e-6330-4a36-9082-dbb50eede9f2-kube-api-access-d259g\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.721106 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52577244-c181-4919-b5b0-040e229163db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.921950 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"52577244-c181-4919-b5b0-040e229163db","Type":"ContainerDied","Data":"8f709bd92c7c6c28297de5f91b3d8f5726929abc3fede49c29940651ade456cb"} Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.922160 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.922206 4931 scope.go:117] "RemoveContainer" containerID="7960131bc61ab6450751b905e24e2ccae8d9fe2d400984f5011874b3859c6719" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.927351 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-thxc2_5732e34e-6330-4a36-9082-dbb50eede9f2/ovs-vswitchd/0.log" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.929241 4931 generic.go:334] "Generic (PLEG): container finished" podID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" exitCode=137 Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.929303 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerDied","Data":"52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17"} Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.929318 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-thxc2" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.929342 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-thxc2" event={"ID":"5732e34e-6330-4a36-9082-dbb50eede9f2","Type":"ContainerDied","Data":"f259a70451b1edb6023ad4c42bb1037e4e2cbc756eed3599105a7d0ba07dc5ac"} Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.976144 4931 scope.go:117] "RemoveContainer" containerID="fa1a93081b269f4ada317cd8046ebb7f1a7c1edf1f6e97c13ed393eacd7e1973" Jan 30 05:30:14 crc kubenswrapper[4931]: I0130 05:30:14.982602 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.008730 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.019209 4931 scope.go:117] "RemoveContainer" containerID="cf71a5f4e5a8611b3edb8a350ba0e2eedbd78c0fb76770c94841152df4a3ab69" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.021344 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-thxc2"] Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.031723 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-thxc2"] Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.044622 4931 scope.go:117] "RemoveContainer" containerID="577bb47efe5f44d38e3c888fdf879028a229599eb11554a344d6a077afa58802" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.075971 4931 scope.go:117] "RemoveContainer" containerID="2aa8176e0269c78ed82e92b582f8a0a44311ad87daa1079e948a15315c72207f" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.113811 4931 scope.go:117] "RemoveContainer" containerID="cc9cceab2cf461936102038fbf1707f4be2e195decb5808193c7f51c4adb08d3" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.130251 4931 scope.go:117] "RemoveContainer" containerID="b4e57fcd32132c1d41dc41783803f5dfdbd53a0317437ca189732c5c62a33471" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.148806 4931 scope.go:117] "RemoveContainer" containerID="6088dfc85b1e09a936dd16faccae994e80e2ccb29840c2c09302842b83328fc2" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.166791 4931 scope.go:117] "RemoveContainer" containerID="072ac216076cb0fec2ec21789975f4b6fdf297b846d9774f980cb280a52a2718" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.191020 4931 scope.go:117] "RemoveContainer" containerID="840bb675eae49d372214aa49017516eb7cc03feb5f0cebb6fb56a2dd4d0837b9" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.209451 4931 scope.go:117] "RemoveContainer" containerID="01987d0b4f025a347544f55c09ea6cf4f3249f746d37222f35ee196eb4525b63" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.226666 4931 scope.go:117] "RemoveContainer" containerID="9eac75907fb7af02eb9159bedaf64c4ca7dee04ca441b549c3a48132b186515f" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.251259 4931 scope.go:117] "RemoveContainer" containerID="64945c3ef451f83b413801249e5cc8cedcb622a5a129ee80defec98e393eed29" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.275143 4931 scope.go:117] "RemoveContainer" containerID="de7939eb8c76f478b34a03e7dd08f4a308dc8bbb63a287db0a5b3eec8794cc7c" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.295268 4931 scope.go:117] "RemoveContainer" containerID="e78b14aff0684ab7de691a18e86ce169b9e67b8f02342d8eba7927de4cb39ec6" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 
05:30:15.331135 4931 scope.go:117] "RemoveContainer" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.355348 4931 scope.go:117] "RemoveContainer" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.380674 4931 scope.go:117] "RemoveContainer" containerID="f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.417785 4931 scope.go:117] "RemoveContainer" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" Jan 30 05:30:15 crc kubenswrapper[4931]: E0130 05:30:15.418457 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17\": container with ID starting with 52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17 not found: ID does not exist" containerID="52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.418512 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17"} err="failed to get container status \"52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17\": rpc error: code = NotFound desc = could not find container \"52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17\": container with ID starting with 52b5db4015b0059c26ae8bc17932b213ec5274f0f558128594eedffdbd557f17 not found: ID does not exist" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.418548 4931 scope.go:117] "RemoveContainer" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" Jan 30 05:30:15 crc kubenswrapper[4931]: E0130 05:30:15.419358 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1\": container with ID starting with ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 not found: ID does not exist" containerID="ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.419418 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1"} err="failed to get container status \"ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1\": rpc error: code = NotFound desc = could not find container \"ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1\": container with ID starting with ebc48ed9003a3c2930951ff8ca463098a233a76571845ef3bd7d328cc44868c1 not found: ID does not exist" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.419495 4931 scope.go:117] "RemoveContainer" containerID="f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054" Jan 30 05:30:15 crc kubenswrapper[4931]: E0130 05:30:15.420111 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054\": container with ID starting with f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054 not found: ID does not exist" 
containerID="f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.420176 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054"} err="failed to get container status \"f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054\": rpc error: code = NotFound desc = could not find container \"f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054\": container with ID starting with f760d9a2256038a984bdc2bea867eb0919b319e2fcb8d4e368c3e9c063a65054 not found: ID does not exist" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.445328 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52577244-c181-4919-b5b0-040e229163db" path="/var/lib/kubelet/pods/52577244-c181-4919-b5b0-040e229163db/volumes" Jan 30 05:30:15 crc kubenswrapper[4931]: I0130 05:30:15.450023 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" path="/var/lib/kubelet/pods/5732e34e-6330-4a36-9082-dbb50eede9f2/volumes" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.474821 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.479882 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671252 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-logs\") pod \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671388 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-combined-ca-bundle\") pod \"9ac55021-a07e-443f-9ee9-e7516556b975\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671505 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data\") pod \"9ac55021-a07e-443f-9ee9-e7516556b975\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671608 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwf5\" (UniqueName: \"kubernetes.io/projected/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-kube-api-access-jpwf5\") pod \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671685 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data-custom\") pod \"9ac55021-a07e-443f-9ee9-e7516556b975\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671729 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data-custom\") pod \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671784 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-combined-ca-bundle\") pod \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.671878 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ac55021-a07e-443f-9ee9-e7516556b975-logs\") pod \"9ac55021-a07e-443f-9ee9-e7516556b975\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.672059 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data\") pod \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\" (UID: \"c0c7aeee-9023-433a-83d0-aa0e9942a0ed\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.672141 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mc9v\" (UniqueName: \"kubernetes.io/projected/9ac55021-a07e-443f-9ee9-e7516556b975-kube-api-access-7mc9v\") pod \"9ac55021-a07e-443f-9ee9-e7516556b975\" (UID: \"9ac55021-a07e-443f-9ee9-e7516556b975\") " Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.672553 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac55021-a07e-443f-9ee9-e7516556b975-logs" (OuterVolumeSpecName: "logs") pod "9ac55021-a07e-443f-9ee9-e7516556b975" (UID: "9ac55021-a07e-443f-9ee9-e7516556b975"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.672773 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ac55021-a07e-443f-9ee9-e7516556b975-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.673139 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-logs" (OuterVolumeSpecName: "logs") pod "c0c7aeee-9023-433a-83d0-aa0e9942a0ed" (UID: "c0c7aeee-9023-433a-83d0-aa0e9942a0ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.677829 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c0c7aeee-9023-433a-83d0-aa0e9942a0ed" (UID: "c0c7aeee-9023-433a-83d0-aa0e9942a0ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.678316 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac55021-a07e-443f-9ee9-e7516556b975-kube-api-access-7mc9v" (OuterVolumeSpecName: "kube-api-access-7mc9v") pod "9ac55021-a07e-443f-9ee9-e7516556b975" (UID: "9ac55021-a07e-443f-9ee9-e7516556b975"). InnerVolumeSpecName "kube-api-access-7mc9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.679318 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ac55021-a07e-443f-9ee9-e7516556b975" (UID: "9ac55021-a07e-443f-9ee9-e7516556b975"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.683599 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-kube-api-access-jpwf5" (OuterVolumeSpecName: "kube-api-access-jpwf5") pod "c0c7aeee-9023-433a-83d0-aa0e9942a0ed" (UID: "c0c7aeee-9023-433a-83d0-aa0e9942a0ed"). InnerVolumeSpecName "kube-api-access-jpwf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.698791 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ac55021-a07e-443f-9ee9-e7516556b975" (UID: "9ac55021-a07e-443f-9ee9-e7516556b975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.711617 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0c7aeee-9023-433a-83d0-aa0e9942a0ed" (UID: "c0c7aeee-9023-433a-83d0-aa0e9942a0ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.730933 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data" (OuterVolumeSpecName: "config-data") pod "c0c7aeee-9023-433a-83d0-aa0e9942a0ed" (UID: "c0c7aeee-9023-433a-83d0-aa0e9942a0ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.735691 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data" (OuterVolumeSpecName: "config-data") pod "9ac55021-a07e-443f-9ee9-e7516556b975" (UID: "9ac55021-a07e-443f-9ee9-e7516556b975"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775068 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775125 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775146 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mc9v\" (UniqueName: \"kubernetes.io/projected/9ac55021-a07e-443f-9ee9-e7516556b975-kube-api-access-7mc9v\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775168 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-logs\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775186 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775203 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775221 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpwf5\" (UniqueName: \"kubernetes.io/projected/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-kube-api-access-jpwf5\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775238 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ac55021-a07e-443f-9ee9-e7516556b975-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.775255 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c0c7aeee-9023-433a-83d0-aa0e9942a0ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.978690 4931 generic.go:334] "Generic (PLEG): container finished" podID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerID="a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33" exitCode=137 Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.978759 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7789bbd757-45b5w" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.978789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7789bbd757-45b5w" event={"ID":"c0c7aeee-9023-433a-83d0-aa0e9942a0ed","Type":"ContainerDied","Data":"a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33"} Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.978886 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7789bbd757-45b5w" event={"ID":"c0c7aeee-9023-433a-83d0-aa0e9942a0ed","Type":"ContainerDied","Data":"b30436eda9ab254987a1049643d0e45f01f12d10b3f44f43863aa93c4c7ce86b"} Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.978917 4931 scope.go:117] "RemoveContainer" containerID="a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33" Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.983954 4931 generic.go:334] "Generic (PLEG): container finished" podID="9ac55021-a07e-443f-9ee9-e7516556b975" containerID="9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae" exitCode=137 Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.984030 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" event={"ID":"9ac55021-a07e-443f-9ee9-e7516556b975","Type":"ContainerDied","Data":"9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae"} Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.984083 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" event={"ID":"9ac55021-a07e-443f-9ee9-e7516556b975","Type":"ContainerDied","Data":"ddc1c9e389f315057ec0a85201907373bcd7582adeb9a6f356d1b36e03264dc9"} Jan 30 05:30:17 crc kubenswrapper[4931]: I0130 05:30:17.984189 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-867d8cd54-77bnr" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.015815 4931 scope.go:117] "RemoveContainer" containerID="eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.032933 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-867d8cd54-77bnr"] Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.044321 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-867d8cd54-77bnr"] Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.054363 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7789bbd757-45b5w"] Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.054439 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7789bbd757-45b5w"] Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.120124 4931 scope.go:117] "RemoveContainer" containerID="a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33" Jan 30 05:30:18 crc kubenswrapper[4931]: E0130 05:30:18.120782 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33\": container with ID starting with a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33 not found: ID does not exist" containerID="a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.120834 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33"} err="failed to get container status \"a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33\": rpc error: code = NotFound desc = could not find container \"a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33\": container with ID starting with a93d68f310310e7303c3edab109a8ed880ebee37b66fadef54a1cd3f06192f33 not found: ID does not exist" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.120874 4931 scope.go:117] "RemoveContainer" containerID="eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123" Jan 30 05:30:18 crc kubenswrapper[4931]: E0130 05:30:18.121356 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123\": container with ID starting with eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123 not found: ID does not exist" containerID="eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.121418 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123"} err="failed to get container status \"eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123\": rpc error: code = NotFound desc = could not find container \"eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123\": container with ID starting with eaa8e6737e823302afed9a1ee3304e390f004c6a3502f957764d475d51b27123 not found: ID does not exist" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.121484 4931 scope.go:117] "RemoveContainer" 
containerID="9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.152015 4931 scope.go:117] "RemoveContainer" containerID="1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.179460 4931 scope.go:117] "RemoveContainer" containerID="9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae" Jan 30 05:30:18 crc kubenswrapper[4931]: E0130 05:30:18.180115 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae\": container with ID starting with 9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae not found: ID does not exist" containerID="9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.180178 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae"} err="failed to get container status \"9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae\": rpc error: code = NotFound desc = could not find container \"9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae\": container with ID starting with 9f630d7d16f08bd8c63c99b99649d0bf49fe0c9e741e4be3226750e5334469ae not found: ID does not exist" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.180218 4931 scope.go:117] "RemoveContainer" containerID="1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d" Jan 30 05:30:18 crc kubenswrapper[4931]: E0130 05:30:18.180966 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d\": container with ID starting with 1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d not found: ID does not exist" containerID="1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d" Jan 30 05:30:18 crc kubenswrapper[4931]: I0130 05:30:18.181034 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d"} err="failed to get container status \"1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d\": rpc error: code = NotFound desc = could not find container \"1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d\": container with ID starting with 1f6797a8cd9dc5bb3661053087d443c4acbfc334ea75b6f96caee3ee4286c94d not found: ID does not exist" Jan 30 05:30:19 crc kubenswrapper[4931]: I0130 05:30:19.440454 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" path="/var/lib/kubelet/pods/9ac55021-a07e-443f-9ee9-e7516556b975/volumes" Jan 30 05:30:19 crc kubenswrapper[4931]: I0130 05:30:19.442202 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" path="/var/lib/kubelet/pods/c0c7aeee-9023-433a-83d0-aa0e9942a0ed/volumes" Jan 30 05:30:21 crc kubenswrapper[4931]: I0130 05:30:21.525985 4931 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod586d7a1d-7b2a-45ac-aacb-b77e95bf3d91"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort 
pod586d7a1d-7b2a-45ac-aacb-b77e95bf3d91] : Timed out while waiting for systemd to remove kubepods-besteffort-pod586d7a1d_7b2a_45ac_aacb_b77e95bf3d91.slice" Jan 30 05:30:47 crc kubenswrapper[4931]: I0130 05:30:47.985631 4931 scope.go:117] "RemoveContainer" containerID="c1a9c2221cdd695e864a018795817226f2731d9a93a0ec46938ca160bd878ce0" Jan 30 05:30:48 crc kubenswrapper[4931]: I0130 05:30:48.031546 4931 scope.go:117] "RemoveContainer" containerID="cf669d89126cd05876fe2026bdc44224135e63c9e8ec5899f87342a850974a32" Jan 30 05:30:48 crc kubenswrapper[4931]: I0130 05:30:48.086766 4931 scope.go:117] "RemoveContainer" containerID="3eb355680179efbcbb2cf73e83f9b34f38755a348dc73a0a8db4b58a9c1de2f1" Jan 30 05:30:48 crc kubenswrapper[4931]: I0130 05:30:48.124953 4931 scope.go:117] "RemoveContainer" containerID="36f1f59d90f7e1367de837bd2375e2c11d0df21e5687e4d77b474faff3e8df0b" Jan 30 05:30:48 crc kubenswrapper[4931]: I0130 05:30:48.155222 4931 scope.go:117] "RemoveContainer" containerID="2951358824ae5ca54f437c7afd5ea7478602f9317a7330914d36e2cd66c684f6" Jan 30 05:30:48 crc kubenswrapper[4931]: I0130 05:30:48.185649 4931 scope.go:117] "RemoveContainer" containerID="1c39e215a63df32503630d9061f8096755ed069b7c32eec93d18140193ca977b" Jan 30 05:30:48 crc kubenswrapper[4931]: I0130 05:30:48.244495 4931 scope.go:117] "RemoveContainer" containerID="ea087fa247f921525edceed36a509f1f8a642fc4da20a1c681c64a7843494e21" Jan 30 05:31:48 crc kubenswrapper[4931]: I0130 05:31:48.855056 4931 scope.go:117] "RemoveContainer" containerID="a64e91cbe33af673e6689e436885784e9c445a56b737d4748cfcdbf6fce27a53" Jan 30 05:31:48 crc kubenswrapper[4931]: I0130 05:31:48.892021 4931 scope.go:117] "RemoveContainer" containerID="0aa30d8d9eae66f63b97cadd6e1c8c0a9f5fe5356f82b3165d21d6b90e8f054f" Jan 30 05:31:48 crc kubenswrapper[4931]: I0130 05:31:48.939628 4931 scope.go:117] "RemoveContainer" containerID="6a0d3a37541dd8bff3edd7d0762f4af19258be892f0b28a3ee8ffd644ba91460" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.004808 4931 scope.go:117] "RemoveContainer" containerID="2de911fa734d3f7bf71674e62b4beae90797f33e1cefb2483c1ee516fdc3ab44" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.030307 4931 scope.go:117] "RemoveContainer" containerID="397e61b0c2ae3421e485ee4187f967d9302d0416b0227f42b0fd6c0769496dc7" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.053870 4931 scope.go:117] "RemoveContainer" containerID="98322602699d7d942a6271f0a7fc74a73af1b5a299d4d538e2ee24bc7375a406" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.082705 4931 scope.go:117] "RemoveContainer" containerID="572f09b29f02dcc488a3f5a5c3037d927c6d17a2fec69ebb6da1e983b4bf5d1f" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.111554 4931 scope.go:117] "RemoveContainer" containerID="94fc6d9869d9820d8c965d9ddc61b4a6003c2bcfb528dd4f82ab1c383ce5be01" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.135519 4931 scope.go:117] "RemoveContainer" containerID="72e98c8676f758af58c2fffef7c54cd9bedf5ae4210e865b9220280e84a05578" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.164167 4931 scope.go:117] "RemoveContainer" containerID="b7fd522240b80788d80f7919145a4aa75ecf42cdb18b9fd6434f7a190f674261" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.194327 4931 scope.go:117] "RemoveContainer" containerID="cd7b183110cc89e2b8163b597d3085517a11ba4e30ab68470086c415c808b949" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.218632 4931 scope.go:117] "RemoveContainer" 
containerID="31798e9f13d46b8721aae715c1edfd7a01d30cecc4d59728bf20993fd26d459b" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.258787 4931 scope.go:117] "RemoveContainer" containerID="f8a2b41856adf7471c684772afc9b12f445fbd24f6ab5036ce18fde6331c17d4" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.290788 4931 scope.go:117] "RemoveContainer" containerID="1140a7961d708d05c85bc33a569a12461dd710e3403faa5dc7621241292e7e99" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.332965 4931 scope.go:117] "RemoveContainer" containerID="cf97da0cb0eda4f19afadca8bff99c148f7d4875216c60c7fab1145ddf6c9ea3" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.366600 4931 scope.go:117] "RemoveContainer" containerID="342d9a5530d63ffbf73ce7c8e40d07fb36a0b7d11a82e475df103d0dfda95398" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.419226 4931 scope.go:117] "RemoveContainer" containerID="e65d7d2b5f976da6a48bf573c615d7b8b7b4da4391bf0bdecb5b42aeee5717fb" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.460714 4931 scope.go:117] "RemoveContainer" containerID="b62af9a31208f4045d6ab5fc627a9d3f9b63bc460555779073074656653065f9" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.494665 4931 scope.go:117] "RemoveContainer" containerID="2f4a9744870428aceb547b2acdf130704ce1aaa6370e1105462da3c72da4e168" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.526317 4931 scope.go:117] "RemoveContainer" containerID="424ff0eefa4783d3488bc19f3934cfec69b31ed4d156eca267b961eb0d363be6" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.557512 4931 scope.go:117] "RemoveContainer" containerID="43be7b60c2a00736cc6eb1df08fb2617062a5a1b05069e0aa41b60294b71b16c" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.607056 4931 scope.go:117] "RemoveContainer" containerID="15692c1b35f8b38884128c50d64f7fe3e0155bf28a22d7aceb44dcecc3b74210" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.631637 4931 scope.go:117] "RemoveContainer" containerID="1a58b66910dccd0a3d3aecf3a69cc3be05007daec35ed4f4da6ecaf7deb3050f" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.661679 4931 scope.go:117] "RemoveContainer" containerID="508e6e3003e86ee32a2b32dcec684271942a70c7d32070551d5605127eb8d9ad" Jan 30 05:31:49 crc kubenswrapper[4931]: I0130 05:31:49.710858 4931 scope.go:117] "RemoveContainer" containerID="dea51d6ee685a2470eaa0864347990ea744cedc85d00846568c67d56ba221ee1" Jan 30 05:31:57 crc kubenswrapper[4931]: I0130 05:31:57.362938 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:31:57 crc kubenswrapper[4931]: I0130 05:31:57.366305 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.387056 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bngkw"] Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390586 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-reaper" Jan 30 05:32:02 crc 
kubenswrapper[4931]: I0130 05:32:02.390612 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-reaper" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390635 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-api" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390649 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-api" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390663 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390676 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390695 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-updater" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390707 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-updater" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390729 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="swift-recon-cron" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390740 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="swift-recon-cron" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390756 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390767 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-server" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390785 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="rsync" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390796 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="rsync" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390816 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390828 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390844 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390855 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-server" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390880 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-httpd" Jan 30 05:32:02 crc 
kubenswrapper[4931]: I0130 05:32:02.390891 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-httpd" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390909 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390921 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390943 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker-log" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390960 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker-log" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.390980 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener-log" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.390997 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener-log" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391021 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a19500-eb44-455f-a8b7-7ee5375b87ef" containerName="collect-profiles" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391038 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a19500-eb44-455f-a8b7-7ee5375b87ef" containerName="collect-profiles" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391064 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391082 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391107 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391125 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391159 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391176 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391201 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391217 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391239 4931 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391254 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-server" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391276 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391290 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391316 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-expirer" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391332 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-expirer" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391356 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391371 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391389 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-updater" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391404 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-updater" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391463 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391480 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: E0130 05:32:02.391510 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server-init" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391527 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server-init" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391854 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="623f3c8f-d741-4ba4-baca-905a13102f38" containerName="mariadb-account-create-update" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391912 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391939 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391964 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.391982 4931 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker-log" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392001 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-expirer" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392030 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-updater" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392051 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-api" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392079 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392104 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="swift-recon-cron" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392144 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovs-vswitchd" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392181 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392206 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-replicator" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392225 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="rsync" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392252 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="object-auditor" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392271 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-reaper" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392293 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-updater" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392314 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener-log" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392334 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="container-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392354 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c7aeee-9023-433a-83d0-aa0e9942a0ed" containerName="barbican-worker" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392377 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f9790c-c395-4c72-b569-3140f703b56f" containerName="neutron-httpd" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392393 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9ac55021-a07e-443f-9ee9-e7516556b975" containerName="barbican-keystone-listener" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392415 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5732e34e-6330-4a36-9082-dbb50eede9f2" containerName="ovsdb-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392474 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="52577244-c181-4919-b5b0-040e229163db" containerName="account-server" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.392489 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a19500-eb44-455f-a8b7-7ee5375b87ef" containerName="collect-profiles" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.394153 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.421447 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bngkw"] Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.467178 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-catalog-content\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.467276 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjwq\" (UniqueName: \"kubernetes.io/projected/567bd9dc-af96-410a-afd9-bda3e473d9af-kube-api-access-4fjwq\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.467612 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-utilities\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.569719 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-catalog-content\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.569832 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjwq\" (UniqueName: \"kubernetes.io/projected/567bd9dc-af96-410a-afd9-bda3e473d9af-kube-api-access-4fjwq\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.570010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-utilities\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.570670 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-catalog-content\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.570835 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-utilities\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.611057 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjwq\" (UniqueName: \"kubernetes.io/projected/567bd9dc-af96-410a-afd9-bda3e473d9af-kube-api-access-4fjwq\") pod \"redhat-operators-bngkw\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.729804 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:02 crc kubenswrapper[4931]: I0130 05:32:02.997664 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bngkw"] Jan 30 05:32:03 crc kubenswrapper[4931]: I0130 05:32:03.512991 4931 generic.go:334] "Generic (PLEG): container finished" podID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerID="62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db" exitCode=0 Jan 30 05:32:03 crc kubenswrapper[4931]: I0130 05:32:03.513107 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerDied","Data":"62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db"} Jan 30 05:32:03 crc kubenswrapper[4931]: I0130 05:32:03.513262 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerStarted","Data":"bfa56798d790bb83e1e8dd951ca707c64a2ef5d7129037e48b02eafae5e0e48f"} Jan 30 05:32:03 crc kubenswrapper[4931]: I0130 05:32:03.514697 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 05:32:04 crc kubenswrapper[4931]: I0130 05:32:04.531654 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerStarted","Data":"850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26"} Jan 30 05:32:05 crc kubenswrapper[4931]: I0130 05:32:05.546217 4931 generic.go:334] "Generic (PLEG): container finished" podID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerID="850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26" exitCode=0 Jan 30 05:32:05 crc kubenswrapper[4931]: I0130 05:32:05.546275 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerDied","Data":"850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26"} Jan 30 05:32:06 crc kubenswrapper[4931]: I0130 05:32:06.560636 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" 
event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerStarted","Data":"7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c"} Jan 30 05:32:06 crc kubenswrapper[4931]: I0130 05:32:06.590253 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bngkw" podStartSLOduration=2.141310454 podStartE2EDuration="4.590087053s" podCreationTimestamp="2026-01-30 05:32:02 +0000 UTC" firstStartedPulling="2026-01-30 05:32:03.514381555 +0000 UTC m=+1458.884291822" lastFinishedPulling="2026-01-30 05:32:05.963158124 +0000 UTC m=+1461.333068421" observedRunningTime="2026-01-30 05:32:06.588004399 +0000 UTC m=+1461.957914706" watchObservedRunningTime="2026-01-30 05:32:06.590087053 +0000 UTC m=+1461.959997350" Jan 30 05:32:12 crc kubenswrapper[4931]: I0130 05:32:12.730142 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:12 crc kubenswrapper[4931]: I0130 05:32:12.730616 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:13 crc kubenswrapper[4931]: I0130 05:32:13.777902 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bngkw" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="registry-server" probeResult="failure" output=< Jan 30 05:32:13 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 05:32:13 crc kubenswrapper[4931]: > Jan 30 05:32:22 crc kubenswrapper[4931]: I0130 05:32:22.812899 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:22 crc kubenswrapper[4931]: I0130 05:32:22.906328 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:23 crc kubenswrapper[4931]: I0130 05:32:23.072103 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bngkw"] Jan 30 05:32:24 crc kubenswrapper[4931]: I0130 05:32:24.753454 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bngkw" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="registry-server" containerID="cri-o://7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c" gracePeriod=2 Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.287209 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.348716 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-catalog-content\") pod \"567bd9dc-af96-410a-afd9-bda3e473d9af\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.348801 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-utilities\") pod \"567bd9dc-af96-410a-afd9-bda3e473d9af\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.348920 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjwq\" (UniqueName: \"kubernetes.io/projected/567bd9dc-af96-410a-afd9-bda3e473d9af-kube-api-access-4fjwq\") pod \"567bd9dc-af96-410a-afd9-bda3e473d9af\" (UID: \"567bd9dc-af96-410a-afd9-bda3e473d9af\") " Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.352266 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-utilities" (OuterVolumeSpecName: "utilities") pod "567bd9dc-af96-410a-afd9-bda3e473d9af" (UID: "567bd9dc-af96-410a-afd9-bda3e473d9af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.357044 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567bd9dc-af96-410a-afd9-bda3e473d9af-kube-api-access-4fjwq" (OuterVolumeSpecName: "kube-api-access-4fjwq") pod "567bd9dc-af96-410a-afd9-bda3e473d9af" (UID: "567bd9dc-af96-410a-afd9-bda3e473d9af"). InnerVolumeSpecName "kube-api-access-4fjwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.451065 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.451104 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjwq\" (UniqueName: \"kubernetes.io/projected/567bd9dc-af96-410a-afd9-bda3e473d9af-kube-api-access-4fjwq\") on node \"crc\" DevicePath \"\"" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.528168 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "567bd9dc-af96-410a-afd9-bda3e473d9af" (UID: "567bd9dc-af96-410a-afd9-bda3e473d9af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.552554 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/567bd9dc-af96-410a-afd9-bda3e473d9af-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.774205 4931 generic.go:334] "Generic (PLEG): container finished" podID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerID="7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c" exitCode=0 Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.774267 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerDied","Data":"7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c"} Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.774309 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bngkw" event={"ID":"567bd9dc-af96-410a-afd9-bda3e473d9af","Type":"ContainerDied","Data":"bfa56798d790bb83e1e8dd951ca707c64a2ef5d7129037e48b02eafae5e0e48f"} Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.774341 4931 scope.go:117] "RemoveContainer" containerID="7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.774583 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bngkw" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.810289 4931 scope.go:117] "RemoveContainer" containerID="850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.834983 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bngkw"] Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.846403 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bngkw"] Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.852334 4931 scope.go:117] "RemoveContainer" containerID="62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.883369 4931 scope.go:117] "RemoveContainer" containerID="7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c" Jan 30 05:32:25 crc kubenswrapper[4931]: E0130 05:32:25.884049 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c\": container with ID starting with 7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c not found: ID does not exist" containerID="7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.884138 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c"} err="failed to get container status \"7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c\": rpc error: code = NotFound desc = could not find container \"7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c\": container with ID starting with 7b64b470a99e0f0df4935c5100bf4ee1e8fdeb10b1a5502d4d1d1b75be9b755c not found: ID does not exist" Jan 30 05:32:25 crc 
kubenswrapper[4931]: I0130 05:32:25.884178 4931 scope.go:117] "RemoveContainer" containerID="850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26" Jan 30 05:32:25 crc kubenswrapper[4931]: E0130 05:32:25.884819 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26\": container with ID starting with 850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26 not found: ID does not exist" containerID="850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.884856 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26"} err="failed to get container status \"850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26\": rpc error: code = NotFound desc = could not find container \"850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26\": container with ID starting with 850b640765f16d3f47d1e2cd35d91b07397292fd55c8906b2e0325eaaf41ca26 not found: ID does not exist" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.884888 4931 scope.go:117] "RemoveContainer" containerID="62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db" Jan 30 05:32:25 crc kubenswrapper[4931]: E0130 05:32:25.885471 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db\": container with ID starting with 62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db not found: ID does not exist" containerID="62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db" Jan 30 05:32:25 crc kubenswrapper[4931]: I0130 05:32:25.885535 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db"} err="failed to get container status \"62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db\": rpc error: code = NotFound desc = could not find container \"62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db\": container with ID starting with 62862ffef7443966c3af316e55c80f6a491b595913973cd1336120ed4addd5db not found: ID does not exist" Jan 30 05:32:27 crc kubenswrapper[4931]: I0130 05:32:27.363194 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:32:27 crc kubenswrapper[4931]: I0130 05:32:27.363593 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:32:27 crc kubenswrapper[4931]: I0130 05:32:27.434491 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" path="/var/lib/kubelet/pods/567bd9dc-af96-410a-afd9-bda3e473d9af/volumes" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.690241 4931 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-pdcrd"] Jan 30 05:32:34 crc kubenswrapper[4931]: E0130 05:32:34.691152 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="extract-utilities" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.691167 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="extract-utilities" Jan 30 05:32:34 crc kubenswrapper[4931]: E0130 05:32:34.691178 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="extract-content" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.691187 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="extract-content" Jan 30 05:32:34 crc kubenswrapper[4931]: E0130 05:32:34.691220 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="registry-server" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.691229 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="registry-server" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.691402 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="567bd9dc-af96-410a-afd9-bda3e473d9af" containerName="registry-server" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.692597 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.716145 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdcrd"] Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.797634 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-catalog-content\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.797726 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-utilities\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.797773 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfjx\" (UniqueName: \"kubernetes.io/projected/c18016f0-c17f-4cc9-ada3-70547fdd56d5-kube-api-access-bzfjx\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.899117 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-utilities\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.899204 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfjx\" (UniqueName: \"kubernetes.io/projected/c18016f0-c17f-4cc9-ada3-70547fdd56d5-kube-api-access-bzfjx\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.899251 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-catalog-content\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.899866 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-catalog-content\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.899882 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-utilities\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:34 crc kubenswrapper[4931]: I0130 05:32:34.918030 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfjx\" (UniqueName: \"kubernetes.io/projected/c18016f0-c17f-4cc9-ada3-70547fdd56d5-kube-api-access-bzfjx\") pod \"community-operators-pdcrd\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:35 crc kubenswrapper[4931]: I0130 05:32:35.024608 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:35 crc kubenswrapper[4931]: I0130 05:32:35.557748 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdcrd"] Jan 30 05:32:35 crc kubenswrapper[4931]: I0130 05:32:35.890353 4931 generic.go:334] "Generic (PLEG): container finished" podID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerID="7bebd804e544da219b011d2d1afda89cddb0ce0335d15b4c02c03698505a49a5" exitCode=0 Jan 30 05:32:35 crc kubenswrapper[4931]: I0130 05:32:35.890409 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerDied","Data":"7bebd804e544da219b011d2d1afda89cddb0ce0335d15b4c02c03698505a49a5"} Jan 30 05:32:35 crc kubenswrapper[4931]: I0130 05:32:35.890480 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerStarted","Data":"0d39ebb93c7c1261d2ef3495adffe2a9945e07bb2fddb670b37115156b3edff5"} Jan 30 05:32:36 crc kubenswrapper[4931]: I0130 05:32:36.900710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerStarted","Data":"9f887e3a8d3842c13ac5dc51c372e65ef00a63929109cc6e6a2c9b0bb12255ff"} Jan 30 05:32:37 crc kubenswrapper[4931]: I0130 05:32:37.914795 4931 generic.go:334] "Generic (PLEG): container finished" podID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerID="9f887e3a8d3842c13ac5dc51c372e65ef00a63929109cc6e6a2c9b0bb12255ff" exitCode=0 Jan 30 05:32:37 crc kubenswrapper[4931]: I0130 05:32:37.914873 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerDied","Data":"9f887e3a8d3842c13ac5dc51c372e65ef00a63929109cc6e6a2c9b0bb12255ff"} Jan 30 05:32:38 crc kubenswrapper[4931]: I0130 05:32:38.924667 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerStarted","Data":"8f836158176bf8715147188f37ab5cf8a46459cebe0bf751eef704d908bdceed"} Jan 30 05:32:38 crc kubenswrapper[4931]: I0130 05:32:38.947498 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pdcrd" podStartSLOduration=2.523688635 podStartE2EDuration="4.947477815s" podCreationTimestamp="2026-01-30 05:32:34 +0000 UTC" firstStartedPulling="2026-01-30 05:32:35.892588038 +0000 UTC m=+1491.262498325" lastFinishedPulling="2026-01-30 05:32:38.316377208 +0000 UTC m=+1493.686287505" observedRunningTime="2026-01-30 05:32:38.945605493 +0000 UTC m=+1494.315515760" watchObservedRunningTime="2026-01-30 05:32:38.947477815 +0000 UTC m=+1494.317388082" Jan 30 05:32:45 crc kubenswrapper[4931]: I0130 05:32:45.025360 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:45 crc kubenswrapper[4931]: I0130 05:32:45.025776 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:45 crc kubenswrapper[4931]: I0130 05:32:45.096906 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:46 crc kubenswrapper[4931]: I0130 05:32:46.059037 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:46 crc kubenswrapper[4931]: I0130 05:32:46.141026 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdcrd"] Jan 30 05:32:48 crc kubenswrapper[4931]: I0130 05:32:48.017973 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pdcrd" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="registry-server" containerID="cri-o://8f836158176bf8715147188f37ab5cf8a46459cebe0bf751eef704d908bdceed" gracePeriod=2 Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.025340 4931 generic.go:334] "Generic (PLEG): container finished" podID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerID="8f836158176bf8715147188f37ab5cf8a46459cebe0bf751eef704d908bdceed" exitCode=0 Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.025398 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerDied","Data":"8f836158176bf8715147188f37ab5cf8a46459cebe0bf751eef704d908bdceed"} Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.308845 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.500935 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-catalog-content\") pod \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.501611 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-utilities\") pod \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.502054 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzfjx\" (UniqueName: \"kubernetes.io/projected/c18016f0-c17f-4cc9-ada3-70547fdd56d5-kube-api-access-bzfjx\") pod \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\" (UID: \"c18016f0-c17f-4cc9-ada3-70547fdd56d5\") " Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.504108 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-utilities" (OuterVolumeSpecName: "utilities") pod "c18016f0-c17f-4cc9-ada3-70547fdd56d5" (UID: "c18016f0-c17f-4cc9-ada3-70547fdd56d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.511499 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18016f0-c17f-4cc9-ada3-70547fdd56d5-kube-api-access-bzfjx" (OuterVolumeSpecName: "kube-api-access-bzfjx") pod "c18016f0-c17f-4cc9-ada3-70547fdd56d5" (UID: "c18016f0-c17f-4cc9-ada3-70547fdd56d5"). InnerVolumeSpecName "kube-api-access-bzfjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.555691 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c18016f0-c17f-4cc9-ada3-70547fdd56d5" (UID: "c18016f0-c17f-4cc9-ada3-70547fdd56d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.603911 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.603952 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzfjx\" (UniqueName: \"kubernetes.io/projected/c18016f0-c17f-4cc9-ada3-70547fdd56d5-kube-api-access-bzfjx\") on node \"crc\" DevicePath \"\"" Jan 30 05:32:49 crc kubenswrapper[4931]: I0130 05:32:49.603970 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c18016f0-c17f-4cc9-ada3-70547fdd56d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.041817 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdcrd" event={"ID":"c18016f0-c17f-4cc9-ada3-70547fdd56d5","Type":"ContainerDied","Data":"0d39ebb93c7c1261d2ef3495adffe2a9945e07bb2fddb670b37115156b3edff5"} Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.041895 4931 scope.go:117] "RemoveContainer" containerID="8f836158176bf8715147188f37ab5cf8a46459cebe0bf751eef704d908bdceed" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.042118 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdcrd" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.066104 4931 scope.go:117] "RemoveContainer" containerID="9f887e3a8d3842c13ac5dc51c372e65ef00a63929109cc6e6a2c9b0bb12255ff" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.091662 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdcrd"] Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.102351 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pdcrd"] Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.117452 4931 scope.go:117] "RemoveContainer" containerID="7bebd804e544da219b011d2d1afda89cddb0ce0335d15b4c02c03698505a49a5" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.170154 4931 scope.go:117] "RemoveContainer" containerID="1ee4814b304dc2facdec4fe5a7ec548e21648d29d6f5eb9a0d58da2eecb4e24b" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.193271 4931 scope.go:117] "RemoveContainer" containerID="1e41b0f0000283bd1a29c28f4d8fdb74fdd5389e3ebd8804eac1db1375b10248" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.230198 4931 scope.go:117] "RemoveContainer" containerID="136922e5f994f3ec703ee0b76647539238ec38bef505c93a23f26dc8f73ef24d" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.259288 4931 scope.go:117] "RemoveContainer" containerID="703277ac00408ad9f7d1f58fc77ac68f5eef4a1090a051f9ca88ddf484b5fda4" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.318914 4931 scope.go:117] "RemoveContainer" containerID="00f6a2dd44878296bf4733164be83ce28b802aa8c1f8943860365c668511c527" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.346893 4931 scope.go:117] "RemoveContainer" containerID="d39f6f20169ccd7e0eea2f20181dc418bff8322ca21fcf82d4f5c3d022992a6f" Jan 30 05:32:50 crc kubenswrapper[4931]: I0130 05:32:50.396934 4931 scope.go:117] "RemoveContainer" containerID="5737a395d9d729d9146705b5cbd342fcc1f4ff9a1712777cd6b02a06ccdce9e4" Jan 30 05:32:51 crc kubenswrapper[4931]: I0130 05:32:51.440057 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" path="/var/lib/kubelet/pods/c18016f0-c17f-4cc9-ada3-70547fdd56d5/volumes" Jan 30 05:32:57 crc kubenswrapper[4931]: I0130 05:32:57.362904 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:32:57 crc kubenswrapper[4931]: I0130 05:32:57.364539 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:32:57 crc kubenswrapper[4931]: I0130 05:32:57.364631 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:32:57 crc kubenswrapper[4931]: I0130 05:32:57.365561 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0"} 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:32:57 crc kubenswrapper[4931]: I0130 05:32:57.365666 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" gracePeriod=600 Jan 30 05:32:57 crc kubenswrapper[4931]: E0130 05:32:57.510395 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:32:58 crc kubenswrapper[4931]: I0130 05:32:58.139510 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" exitCode=0 Jan 30 05:32:58 crc kubenswrapper[4931]: I0130 05:32:58.139601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0"} Jan 30 05:32:58 crc kubenswrapper[4931]: I0130 05:32:58.139665 4931 scope.go:117] "RemoveContainer" containerID="083c2726f719c1b6c228fc0d209a309a403985263c1ced3ea0982529442fd973" Jan 30 05:32:58 crc kubenswrapper[4931]: I0130 05:32:58.140634 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:32:58 crc kubenswrapper[4931]: E0130 05:32:58.141476 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.466127 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kdmvh"] Jan 30 05:33:05 crc kubenswrapper[4931]: E0130 05:33:05.468402 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="extract-utilities" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.468478 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="extract-utilities" Jan 30 05:33:05 crc kubenswrapper[4931]: E0130 05:33:05.468513 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="registry-server" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.468533 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="registry-server" Jan 30 05:33:05 crc kubenswrapper[4931]: E0130 05:33:05.468562 4931 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="extract-content" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.468578 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="extract-content" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.468949 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18016f0-c17f-4cc9-ada3-70547fdd56d5" containerName="registry-server" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.470817 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.498820 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdmvh"] Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.668766 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-catalog-content\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.668862 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-utilities\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.668936 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pcmx\" (UniqueName: \"kubernetes.io/projected/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-kube-api-access-8pcmx\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.770705 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-catalog-content\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.770794 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-utilities\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.770849 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pcmx\" (UniqueName: \"kubernetes.io/projected/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-kube-api-access-8pcmx\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.771237 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-catalog-content\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.771592 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-utilities\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.797182 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pcmx\" (UniqueName: \"kubernetes.io/projected/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-kube-api-access-8pcmx\") pod \"certified-operators-kdmvh\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:05 crc kubenswrapper[4931]: I0130 05:33:05.810583 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:06 crc kubenswrapper[4931]: I0130 05:33:06.291885 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kdmvh"] Jan 30 05:33:07 crc kubenswrapper[4931]: I0130 05:33:07.235106 4931 generic.go:334] "Generic (PLEG): container finished" podID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerID="ac8579f185e9d5a31fb735e240a7db9474963e4a3fc8a2610621483b60d98f53" exitCode=0 Jan 30 05:33:07 crc kubenswrapper[4931]: I0130 05:33:07.235152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerDied","Data":"ac8579f185e9d5a31fb735e240a7db9474963e4a3fc8a2610621483b60d98f53"} Jan 30 05:33:07 crc kubenswrapper[4931]: I0130 05:33:07.235182 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerStarted","Data":"a3a8b0b1662721461bf6495f03be27556288bb0c9d87899aed8cd07bec3d290d"} Jan 30 05:33:08 crc kubenswrapper[4931]: I0130 05:33:08.272788 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerStarted","Data":"d1c237bd63c74b3077c6a07d297620056ee6292b8cbe29e75c0b1d8fe17f2d15"} Jan 30 05:33:09 crc kubenswrapper[4931]: I0130 05:33:09.287387 4931 generic.go:334] "Generic (PLEG): container finished" podID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerID="d1c237bd63c74b3077c6a07d297620056ee6292b8cbe29e75c0b1d8fe17f2d15" exitCode=0 Jan 30 05:33:09 crc kubenswrapper[4931]: I0130 05:33:09.287446 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerDied","Data":"d1c237bd63c74b3077c6a07d297620056ee6292b8cbe29e75c0b1d8fe17f2d15"} Jan 30 05:33:09 crc kubenswrapper[4931]: I0130 05:33:09.423639 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:33:09 crc kubenswrapper[4931]: E0130 05:33:09.424510 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:33:10 crc kubenswrapper[4931]: I0130 05:33:10.300438 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerStarted","Data":"ae533e994fe609ff12ac93cc07298624016e31c68088cbe01149f6798c73cc4c"} Jan 30 05:33:10 crc kubenswrapper[4931]: I0130 05:33:10.326097 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kdmvh" podStartSLOduration=2.761214968 podStartE2EDuration="5.32606879s" podCreationTimestamp="2026-01-30 05:33:05 +0000 UTC" firstStartedPulling="2026-01-30 05:33:07.237746704 +0000 UTC m=+1522.607656971" lastFinishedPulling="2026-01-30 05:33:09.802600506 +0000 UTC m=+1525.172510793" observedRunningTime="2026-01-30 05:33:10.319499697 +0000 UTC m=+1525.689409984" watchObservedRunningTime="2026-01-30 05:33:10.32606879 +0000 UTC m=+1525.695979087" Jan 30 05:33:15 crc kubenswrapper[4931]: I0130 05:33:15.810890 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:15 crc kubenswrapper[4931]: I0130 05:33:15.811279 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:15 crc kubenswrapper[4931]: I0130 05:33:15.901258 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:16 crc kubenswrapper[4931]: I0130 05:33:16.391356 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:16 crc kubenswrapper[4931]: I0130 05:33:16.445117 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdmvh"] Jan 30 05:33:18 crc kubenswrapper[4931]: I0130 05:33:18.367853 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kdmvh" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="registry-server" containerID="cri-o://ae533e994fe609ff12ac93cc07298624016e31c68088cbe01149f6798c73cc4c" gracePeriod=2 Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.381499 4931 generic.go:334] "Generic (PLEG): container finished" podID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerID="ae533e994fe609ff12ac93cc07298624016e31c68088cbe01149f6798c73cc4c" exitCode=0 Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.381630 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerDied","Data":"ae533e994fe609ff12ac93cc07298624016e31c68088cbe01149f6798c73cc4c"} Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.381968 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kdmvh" event={"ID":"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d","Type":"ContainerDied","Data":"a3a8b0b1662721461bf6495f03be27556288bb0c9d87899aed8cd07bec3d290d"} Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.381992 4931 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="a3a8b0b1662721461bf6495f03be27556288bb0c9d87899aed8cd07bec3d290d" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.434943 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.600605 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-utilities\") pod \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.600773 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pcmx\" (UniqueName: \"kubernetes.io/projected/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-kube-api-access-8pcmx\") pod \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.600959 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-catalog-content\") pod \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\" (UID: \"9f9d5f6a-c304-4ef4-aebb-9f346e7f786d\") " Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.602391 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-utilities" (OuterVolumeSpecName: "utilities") pod "9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" (UID: "9f9d5f6a-c304-4ef4-aebb-9f346e7f786d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.617023 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-kube-api-access-8pcmx" (OuterVolumeSpecName: "kube-api-access-8pcmx") pod "9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" (UID: "9f9d5f6a-c304-4ef4-aebb-9f346e7f786d"). InnerVolumeSpecName "kube-api-access-8pcmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.690697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" (UID: "9f9d5f6a-c304-4ef4-aebb-9f346e7f786d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.703671 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.703710 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pcmx\" (UniqueName: \"kubernetes.io/projected/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-kube-api-access-8pcmx\") on node \"crc\" DevicePath \"\"" Jan 30 05:33:19 crc kubenswrapper[4931]: I0130 05:33:19.703723 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:33:20 crc kubenswrapper[4931]: I0130 05:33:20.391271 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kdmvh" Jan 30 05:33:20 crc kubenswrapper[4931]: I0130 05:33:20.440976 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kdmvh"] Jan 30 05:33:20 crc kubenswrapper[4931]: I0130 05:33:20.445921 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kdmvh"] Jan 30 05:33:21 crc kubenswrapper[4931]: I0130 05:33:21.436634 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" path="/var/lib/kubelet/pods/9f9d5f6a-c304-4ef4-aebb-9f346e7f786d/volumes" Jan 30 05:33:24 crc kubenswrapper[4931]: I0130 05:33:24.422582 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:33:24 crc kubenswrapper[4931]: E0130 05:33:24.424013 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:33:37 crc kubenswrapper[4931]: I0130 05:33:37.422892 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:33:37 crc kubenswrapper[4931]: E0130 05:33:37.423847 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:33:49 crc kubenswrapper[4931]: I0130 05:33:49.422815 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:33:49 crc kubenswrapper[4931]: E0130 05:33:49.423909 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.536363 4931 scope.go:117] "RemoveContainer" containerID="daeb4e60a2f2e8b0ecc5573dd48689c8e466dc66250fe49e905723d105d79613" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.570470 4931 scope.go:117] "RemoveContainer" containerID="6c4ebb40e4402e95e337ac0e8eea0a4fb903b22dbcfc5ac614853d0c17f24e3a" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.628944 4931 scope.go:117] "RemoveContainer" containerID="3baca7478354f0fb4066c265761c1fd4465b993347daea084eeb2d40cd40bed6" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.655810 4931 scope.go:117] "RemoveContainer" containerID="c948d726013eb4e8273ef998118172023cae6536ca99db77a7f1ebd4884def12" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.684201 4931 scope.go:117] "RemoveContainer" containerID="c2771265ae8a990e0e69c0f116c64cc25eecd94f6e185173fb3394e2e6fbe468" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.742717 4931 scope.go:117] "RemoveContainer" containerID="ed1045d9c4b634bdebcb19b30994b9f7ac39021883a8b98a833d09018502f440" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.788222 4931 scope.go:117] "RemoveContainer" containerID="9398f7e713fb447b3a151d286be2d2910e4d8535fd421e906c46b8cc2c9a4728" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.816757 4931 scope.go:117] "RemoveContainer" containerID="2823dcc09d156bc746ffbc3ab196c3d6e136f453bf377837d7cce10861a168f4" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.851098 4931 scope.go:117] "RemoveContainer" containerID="976d06480a8d07dd149684c2767dbf90e61f0fd7efbc4d623ba32e7d83fb861e" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.884182 4931 scope.go:117] "RemoveContainer" containerID="edf9b3d1d8428caf5db14c3063b00d649e4d886f974003048a406d3bcf0b7c43" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.927004 4931 scope.go:117] "RemoveContainer" containerID="571155fa2c4a4cc11bc78f96b7b5b636cdda183726d54338d2bf0cc02d77f003" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.957347 4931 scope.go:117] "RemoveContainer" containerID="83b746b64ed4eb6c60d1909a3bd54f4f030ff6949e754157b24384ceb5419c06" Jan 30 05:33:50 crc kubenswrapper[4931]: I0130 05:33:50.984605 4931 scope.go:117] "RemoveContainer" containerID="5712d27fd9c195ed4c35f4530c38c5e87c6a63708aedb0fa792d34d9e26a0b9a" Jan 30 05:33:51 crc kubenswrapper[4931]: I0130 05:33:51.016317 4931 scope.go:117] "RemoveContainer" containerID="02a426537f79889d684c812318ea1dd0bd0af03a098fb1e7d47cd94e43353e1c" Jan 30 05:34:02 crc kubenswrapper[4931]: I0130 05:34:02.422826 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:34:02 crc kubenswrapper[4931]: E0130 05:34:02.423802 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.216875 4931 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-hjwsc"] Jan 30 05:34:12 crc kubenswrapper[4931]: E0130 05:34:12.217642 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="extract-content" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.217657 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="extract-content" Jan 30 05:34:12 crc kubenswrapper[4931]: E0130 05:34:12.217668 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="registry-server" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.217676 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="registry-server" Jan 30 05:34:12 crc kubenswrapper[4931]: E0130 05:34:12.217706 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="extract-utilities" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.217715 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="extract-utilities" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.217896 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9d5f6a-c304-4ef4-aebb-9f346e7f786d" containerName="registry-server" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.218988 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.239388 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjwsc"] Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.240514 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-catalog-content\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.240760 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-utilities\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.240837 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smnf4\" (UniqueName: \"kubernetes.io/projected/b37cadec-51d4-44c5-bea0-fec0eec934a5-kube-api-access-smnf4\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.346034 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-utilities\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.346100 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-smnf4\" (UniqueName: \"kubernetes.io/projected/b37cadec-51d4-44c5-bea0-fec0eec934a5-kube-api-access-smnf4\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.346276 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-catalog-content\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.346962 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-catalog-content\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.347978 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-utilities\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.374555 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smnf4\" (UniqueName: \"kubernetes.io/projected/b37cadec-51d4-44c5-bea0-fec0eec934a5-kube-api-access-smnf4\") pod \"redhat-marketplace-hjwsc\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.563394 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.818584 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjwsc"] Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.962943 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerStarted","Data":"91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2"} Jan 30 05:34:12 crc kubenswrapper[4931]: I0130 05:34:12.962984 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerStarted","Data":"baa1bec2ab7247b413a344955d2ac6f2777451917fd1e7ba7fd3c0f693e3f21b"} Jan 30 05:34:13 crc kubenswrapper[4931]: I0130 05:34:13.976731 4931 generic.go:334] "Generic (PLEG): container finished" podID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerID="91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2" exitCode=0 Jan 30 05:34:13 crc kubenswrapper[4931]: I0130 05:34:13.976803 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerDied","Data":"91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2"} Jan 30 05:34:14 crc kubenswrapper[4931]: I0130 05:34:14.422598 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:34:14 crc kubenswrapper[4931]: E0130 05:34:14.423564 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:34:14 crc kubenswrapper[4931]: I0130 05:34:14.990944 4931 generic.go:334] "Generic (PLEG): container finished" podID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerID="e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e" exitCode=0 Jan 30 05:34:14 crc kubenswrapper[4931]: I0130 05:34:14.991014 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerDied","Data":"e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e"} Jan 30 05:34:16 crc kubenswrapper[4931]: I0130 05:34:16.021681 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerStarted","Data":"21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a"} Jan 30 05:34:16 crc kubenswrapper[4931]: I0130 05:34:16.057559 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hjwsc" podStartSLOduration=2.593710662 podStartE2EDuration="4.057536932s" podCreationTimestamp="2026-01-30 05:34:12 +0000 UTC" firstStartedPulling="2026-01-30 05:34:13.979573375 +0000 UTC m=+1589.349483672" lastFinishedPulling="2026-01-30 05:34:15.443399645 +0000 UTC m=+1590.813309942" observedRunningTime="2026-01-30 
05:34:16.054162358 +0000 UTC m=+1591.424072645" watchObservedRunningTime="2026-01-30 05:34:16.057536932 +0000 UTC m=+1591.427447209" Jan 30 05:34:22 crc kubenswrapper[4931]: I0130 05:34:22.564829 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:22 crc kubenswrapper[4931]: I0130 05:34:22.565526 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:22 crc kubenswrapper[4931]: I0130 05:34:22.640030 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:23 crc kubenswrapper[4931]: I0130 05:34:23.159352 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:23 crc kubenswrapper[4931]: I0130 05:34:23.233741 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjwsc"] Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.100287 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hjwsc" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="registry-server" containerID="cri-o://21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a" gracePeriod=2 Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.568291 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.667764 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-catalog-content\") pod \"b37cadec-51d4-44c5-bea0-fec0eec934a5\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.667824 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smnf4\" (UniqueName: \"kubernetes.io/projected/b37cadec-51d4-44c5-bea0-fec0eec934a5-kube-api-access-smnf4\") pod \"b37cadec-51d4-44c5-bea0-fec0eec934a5\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.667935 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-utilities\") pod \"b37cadec-51d4-44c5-bea0-fec0eec934a5\" (UID: \"b37cadec-51d4-44c5-bea0-fec0eec934a5\") " Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.679107 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-utilities" (OuterVolumeSpecName: "utilities") pod "b37cadec-51d4-44c5-bea0-fec0eec934a5" (UID: "b37cadec-51d4-44c5-bea0-fec0eec934a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.704630 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37cadec-51d4-44c5-bea0-fec0eec934a5-kube-api-access-smnf4" (OuterVolumeSpecName: "kube-api-access-smnf4") pod "b37cadec-51d4-44c5-bea0-fec0eec934a5" (UID: "b37cadec-51d4-44c5-bea0-fec0eec934a5"). InnerVolumeSpecName "kube-api-access-smnf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.712060 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b37cadec-51d4-44c5-bea0-fec0eec934a5" (UID: "b37cadec-51d4-44c5-bea0-fec0eec934a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.773962 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.774028 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smnf4\" (UniqueName: \"kubernetes.io/projected/b37cadec-51d4-44c5-bea0-fec0eec934a5-kube-api-access-smnf4\") on node \"crc\" DevicePath \"\"" Jan 30 05:34:25 crc kubenswrapper[4931]: I0130 05:34:25.774058 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37cadec-51d4-44c5-bea0-fec0eec934a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.115352 4931 generic.go:334] "Generic (PLEG): container finished" podID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerID="21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a" exitCode=0 Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.115414 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerDied","Data":"21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a"} Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.115485 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hjwsc" event={"ID":"b37cadec-51d4-44c5-bea0-fec0eec934a5","Type":"ContainerDied","Data":"baa1bec2ab7247b413a344955d2ac6f2777451917fd1e7ba7fd3c0f693e3f21b"} Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.115520 4931 scope.go:117] "RemoveContainer" containerID="21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.115672 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hjwsc" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.149670 4931 scope.go:117] "RemoveContainer" containerID="e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.177751 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjwsc"] Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.193090 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hjwsc"] Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.196476 4931 scope.go:117] "RemoveContainer" containerID="91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.228682 4931 scope.go:117] "RemoveContainer" containerID="21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a" Jan 30 05:34:26 crc kubenswrapper[4931]: E0130 05:34:26.233261 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a\": container with ID starting with 21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a not found: ID does not exist" containerID="21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.233345 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a"} err="failed to get container status \"21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a\": rpc error: code = NotFound desc = could not find container \"21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a\": container with ID starting with 21eb531868f45fbe6cad91e5b677c4767f86767d02e4c81b9388507e75b39c8a not found: ID does not exist" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.233396 4931 scope.go:117] "RemoveContainer" containerID="e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e" Jan 30 05:34:26 crc kubenswrapper[4931]: E0130 05:34:26.234004 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e\": container with ID starting with e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e not found: ID does not exist" containerID="e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.234049 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e"} err="failed to get container status \"e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e\": rpc error: code = NotFound desc = could not find container \"e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e\": container with ID starting with e2a32dfc112f9b116108a13a5d3f36ea9b3d3bc48e7e7684e30ebf5e91a98e3e not found: ID does not exist" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.234077 4931 scope.go:117] "RemoveContainer" containerID="91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2" Jan 30 05:34:26 crc kubenswrapper[4931]: E0130 05:34:26.234692 4931 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2\": container with ID starting with 91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2 not found: ID does not exist" containerID="91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2" Jan 30 05:34:26 crc kubenswrapper[4931]: I0130 05:34:26.234761 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2"} err="failed to get container status \"91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2\": rpc error: code = NotFound desc = could not find container \"91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2\": container with ID starting with 91ca3a00bf52ac983475cb121e8ea3da1cb7835cb041680436c65b90998554f2 not found: ID does not exist" Jan 30 05:34:27 crc kubenswrapper[4931]: I0130 05:34:27.422914 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:34:27 crc kubenswrapper[4931]: E0130 05:34:27.423294 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:34:27 crc kubenswrapper[4931]: I0130 05:34:27.439704 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" path="/var/lib/kubelet/pods/b37cadec-51d4-44c5-bea0-fec0eec934a5/volumes" Jan 30 05:34:39 crc kubenswrapper[4931]: I0130 05:34:39.421468 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:34:39 crc kubenswrapper[4931]: E0130 05:34:39.422355 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:34:50 crc kubenswrapper[4931]: I0130 05:34:50.422791 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:34:50 crc kubenswrapper[4931]: E0130 05:34:50.424065 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.304017 4931 scope.go:117] "RemoveContainer" containerID="056aa11a16b72fe7fde4370093154af79d24b07c3142cb8943c78be2016d3fc6" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.350553 4931 scope.go:117] "RemoveContainer" 
containerID="346e5462e41c54c8f5c2422490f080d2b64f85c405ea5cc5337aa66fee775153" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.427592 4931 scope.go:117] "RemoveContainer" containerID="a729151ede12640ea81c41f5d7f2d36efd861e7a4d31b991fe42dd4d2139fbe2" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.458521 4931 scope.go:117] "RemoveContainer" containerID="0f6848e1ccd25c33da13cfce62f451555f794b35623ff124d320281a39cb9911" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.486838 4931 scope.go:117] "RemoveContainer" containerID="a32ca29963fb38a6014b8500b2c2495801c36c3f1563f62dcc7d71405aa5c328" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.547074 4931 scope.go:117] "RemoveContainer" containerID="62da5f526098b3b9f5437a81119156f87878963a1c44c026236d9b63e20bbac5" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.574939 4931 scope.go:117] "RemoveContainer" containerID="9186f065f875b33ba8c3817977c815fff7b67dc7404ddb06ea6a85fb64800755" Jan 30 05:34:51 crc kubenswrapper[4931]: I0130 05:34:51.604697 4931 scope.go:117] "RemoveContainer" containerID="6fe17572613dbd341b30bd762c1b5735b49c432308fe3f9a9ede6d5185282afe" Jan 30 05:35:03 crc kubenswrapper[4931]: I0130 05:35:03.422878 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:35:03 crc kubenswrapper[4931]: E0130 05:35:03.424107 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:35:14 crc kubenswrapper[4931]: I0130 05:35:14.422756 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:35:14 crc kubenswrapper[4931]: E0130 05:35:14.423827 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:35:25 crc kubenswrapper[4931]: I0130 05:35:25.428948 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:35:25 crc kubenswrapper[4931]: E0130 05:35:25.430069 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:35:36 crc kubenswrapper[4931]: I0130 05:35:36.422754 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:35:36 crc kubenswrapper[4931]: E0130 05:35:36.423878 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:35:47 crc kubenswrapper[4931]: I0130 05:35:47.422779 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:35:47 crc kubenswrapper[4931]: E0130 05:35:47.424029 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:35:51 crc kubenswrapper[4931]: I0130 05:35:51.768048 4931 scope.go:117] "RemoveContainer" containerID="25cc12087ab98d0fc79e679c4de5be61f557329f293a5a68393ba8b20a57c428" Jan 30 05:35:59 crc kubenswrapper[4931]: I0130 05:35:59.423059 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:35:59 crc kubenswrapper[4931]: E0130 05:35:59.423776 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:36:14 crc kubenswrapper[4931]: I0130 05:36:14.422657 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:36:14 crc kubenswrapper[4931]: E0130 05:36:14.423931 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:36:29 crc kubenswrapper[4931]: I0130 05:36:29.422393 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:36:29 crc kubenswrapper[4931]: E0130 05:36:29.423485 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:36:43 crc kubenswrapper[4931]: I0130 05:36:43.423494 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:36:43 crc kubenswrapper[4931]: E0130 05:36:43.424576 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:36:56 crc kubenswrapper[4931]: I0130 05:36:56.421831 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:36:56 crc kubenswrapper[4931]: E0130 05:36:56.422881 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:37:11 crc kubenswrapper[4931]: I0130 05:37:11.422929 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:37:11 crc kubenswrapper[4931]: E0130 05:37:11.424138 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:37:24 crc kubenswrapper[4931]: I0130 05:37:24.422323 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:37:24 crc kubenswrapper[4931]: E0130 05:37:24.422943 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:37:38 crc kubenswrapper[4931]: I0130 05:37:38.422091 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:37:38 crc kubenswrapper[4931]: E0130 05:37:38.423135 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:37:51 crc kubenswrapper[4931]: I0130 05:37:51.422060 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:37:51 crc kubenswrapper[4931]: E0130 05:37:51.423136 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:38:04 crc kubenswrapper[4931]: I0130 05:38:04.422113 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:38:05 crc kubenswrapper[4931]: I0130 05:38:05.265683 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"b0bb1a2bae55022b0965d85460d3f0d10d63b5551af2e94575d14b0dc028f44b"} Jan 30 05:39:51 crc kubenswrapper[4931]: I0130 05:39:51.888405 4931 scope.go:117] "RemoveContainer" containerID="ae533e994fe609ff12ac93cc07298624016e31c68088cbe01149f6798c73cc4c" Jan 30 05:39:51 crc kubenswrapper[4931]: I0130 05:39:51.934200 4931 scope.go:117] "RemoveContainer" containerID="d1c237bd63c74b3077c6a07d297620056ee6292b8cbe29e75c0b1d8fe17f2d15" Jan 30 05:39:51 crc kubenswrapper[4931]: I0130 05:39:51.968253 4931 scope.go:117] "RemoveContainer" containerID="ac8579f185e9d5a31fb735e240a7db9474963e4a3fc8a2610621483b60d98f53" Jan 30 05:40:27 crc kubenswrapper[4931]: I0130 05:40:27.363768 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:40:27 crc kubenswrapper[4931]: I0130 05:40:27.364282 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:40:57 crc kubenswrapper[4931]: I0130 05:40:57.363050 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:40:57 crc kubenswrapper[4931]: I0130 05:40:57.363676 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:41:27 crc kubenswrapper[4931]: I0130 05:41:27.362922 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:41:27 crc kubenswrapper[4931]: I0130 05:41:27.363690 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:41:27 crc kubenswrapper[4931]: I0130 05:41:27.363764 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:41:27 crc kubenswrapper[4931]: I0130 05:41:27.364692 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0bb1a2bae55022b0965d85460d3f0d10d63b5551af2e94575d14b0dc028f44b"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:41:27 crc kubenswrapper[4931]: I0130 05:41:27.364820 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://b0bb1a2bae55022b0965d85460d3f0d10d63b5551af2e94575d14b0dc028f44b" gracePeriod=600 Jan 30 05:41:28 crc kubenswrapper[4931]: I0130 05:41:28.190256 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="b0bb1a2bae55022b0965d85460d3f0d10d63b5551af2e94575d14b0dc028f44b" exitCode=0 Jan 30 05:41:28 crc kubenswrapper[4931]: I0130 05:41:28.190328 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"b0bb1a2bae55022b0965d85460d3f0d10d63b5551af2e94575d14b0dc028f44b"} Jan 30 05:41:28 crc kubenswrapper[4931]: I0130 05:41:28.190679 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4"} Jan 30 05:41:28 crc kubenswrapper[4931]: I0130 05:41:28.190713 4931 scope.go:117] "RemoveContainer" containerID="245db06188d68a7c7dde2f80f6002515bda5e496caa936666637ab4b8b52d5a0" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.272293 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7c2mf"] Jan 30 05:42:58 crc kubenswrapper[4931]: E0130 05:42:58.273287 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="extract-utilities" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.273308 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="extract-utilities" Jan 30 05:42:58 crc kubenswrapper[4931]: E0130 05:42:58.273341 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="registry-server" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.273357 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="registry-server" Jan 30 05:42:58 crc kubenswrapper[4931]: E0130 05:42:58.273380 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="extract-content" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.273393 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" containerName="extract-content" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.273673 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37cadec-51d4-44c5-bea0-fec0eec934a5" 
containerName="registry-server" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.275460 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.290347 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c2mf"] Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.382880 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9fv\" (UniqueName: \"kubernetes.io/projected/857de757-e591-4d21-8c09-df06fe672113-kube-api-access-rh9fv\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.383011 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-catalog-content\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.383068 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-utilities\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.484295 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh9fv\" (UniqueName: \"kubernetes.io/projected/857de757-e591-4d21-8c09-df06fe672113-kube-api-access-rh9fv\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.484450 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-catalog-content\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.484508 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-utilities\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.485331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-catalog-content\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.485400 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-utilities\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " 
pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.525482 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh9fv\" (UniqueName: \"kubernetes.io/projected/857de757-e591-4d21-8c09-df06fe672113-kube-api-access-rh9fv\") pod \"community-operators-7c2mf\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:58 crc kubenswrapper[4931]: I0130 05:42:58.608080 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:42:59 crc kubenswrapper[4931]: I0130 05:42:59.065783 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c2mf"] Jan 30 05:42:59 crc kubenswrapper[4931]: I0130 05:42:59.324857 4931 generic.go:334] "Generic (PLEG): container finished" podID="857de757-e591-4d21-8c09-df06fe672113" containerID="f6a175af3e413d326c864a9a3c6f8a72ada9a5572ccc2a95480d4ece4789f1a7" exitCode=0 Jan 30 05:42:59 crc kubenswrapper[4931]: I0130 05:42:59.324923 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerDied","Data":"f6a175af3e413d326c864a9a3c6f8a72ada9a5572ccc2a95480d4ece4789f1a7"} Jan 30 05:42:59 crc kubenswrapper[4931]: I0130 05:42:59.324960 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerStarted","Data":"0fd0f29f39dfc2f8fc36c20641bb3dc4341ba8d334f88c8b701b78ba022b2b94"} Jan 30 05:42:59 crc kubenswrapper[4931]: I0130 05:42:59.327564 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 05:43:00 crc kubenswrapper[4931]: I0130 05:43:00.335106 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerStarted","Data":"b672890edb88cc835c7a676cc899fe42c12ea91373683fb907536ec1d8955df4"} Jan 30 05:43:01 crc kubenswrapper[4931]: I0130 05:43:01.346526 4931 generic.go:334] "Generic (PLEG): container finished" podID="857de757-e591-4d21-8c09-df06fe672113" containerID="b672890edb88cc835c7a676cc899fe42c12ea91373683fb907536ec1d8955df4" exitCode=0 Jan 30 05:43:01 crc kubenswrapper[4931]: I0130 05:43:01.346615 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerDied","Data":"b672890edb88cc835c7a676cc899fe42c12ea91373683fb907536ec1d8955df4"} Jan 30 05:43:02 crc kubenswrapper[4931]: I0130 05:43:02.357444 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerStarted","Data":"e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6"} Jan 30 05:43:02 crc kubenswrapper[4931]: I0130 05:43:02.382030 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7c2mf" podStartSLOduration=1.974811441 podStartE2EDuration="4.382008155s" podCreationTimestamp="2026-01-30 05:42:58 +0000 UTC" firstStartedPulling="2026-01-30 05:42:59.327188684 +0000 UTC m=+2114.697098971" lastFinishedPulling="2026-01-30 05:43:01.734385418 
+0000 UTC m=+2117.104295685" observedRunningTime="2026-01-30 05:43:02.375075085 +0000 UTC m=+2117.744985362" watchObservedRunningTime="2026-01-30 05:43:02.382008155 +0000 UTC m=+2117.751918422" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.608510 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.609289 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.647026 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdkc7"] Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.649526 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.664229 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdkc7"] Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.695069 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.730256 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktl6\" (UniqueName: \"kubernetes.io/projected/e047952b-acbc-4cc4-b175-8a23b1926766-kube-api-access-6ktl6\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.730742 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-utilities\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.730913 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-catalog-content\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.832053 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-catalog-content\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.832124 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ktl6\" (UniqueName: \"kubernetes.io/projected/e047952b-acbc-4cc4-b175-8a23b1926766-kube-api-access-6ktl6\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.832203 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-utilities\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.832777 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-utilities\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.832776 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-catalog-content\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.861930 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ktl6\" (UniqueName: \"kubernetes.io/projected/e047952b-acbc-4cc4-b175-8a23b1926766-kube-api-access-6ktl6\") pod \"certified-operators-pdkc7\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:08 crc kubenswrapper[4931]: I0130 05:43:08.974526 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:09 crc kubenswrapper[4931]: I0130 05:43:09.493839 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdkc7"] Jan 30 05:43:09 crc kubenswrapper[4931]: I0130 05:43:09.521202 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:43:10 crc kubenswrapper[4931]: I0130 05:43:10.434504 4931 generic.go:334] "Generic (PLEG): container finished" podID="e047952b-acbc-4cc4-b175-8a23b1926766" containerID="c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb" exitCode=0 Jan 30 05:43:10 crc kubenswrapper[4931]: I0130 05:43:10.435891 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerDied","Data":"c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb"} Jan 30 05:43:10 crc kubenswrapper[4931]: I0130 05:43:10.435915 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerStarted","Data":"43370026d1dd1ea6668c33c998acaf57c52537d0f5113eb38c1504292cd9a450"} Jan 30 05:43:11 crc kubenswrapper[4931]: I0130 05:43:11.447532 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerStarted","Data":"0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f"} Jan 30 05:43:12 crc kubenswrapper[4931]: I0130 05:43:12.456152 4931 generic.go:334] "Generic (PLEG): container finished" podID="e047952b-acbc-4cc4-b175-8a23b1926766" containerID="0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f" exitCode=0 Jan 30 05:43:12 crc kubenswrapper[4931]: I0130 05:43:12.456225 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerDied","Data":"0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f"} Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.449263 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g84bs"] Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.450876 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.466272 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerStarted","Data":"d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44"} Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.493381 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdkc7" podStartSLOduration=3.078526799 podStartE2EDuration="5.49335773s" podCreationTimestamp="2026-01-30 05:43:08 +0000 UTC" firstStartedPulling="2026-01-30 05:43:10.436674949 +0000 UTC m=+2125.806585196" lastFinishedPulling="2026-01-30 05:43:12.85150588 +0000 UTC m=+2128.221416127" observedRunningTime="2026-01-30 05:43:13.492672411 +0000 UTC m=+2128.862582708" watchObservedRunningTime="2026-01-30 05:43:13.49335773 +0000 UTC m=+2128.863268017" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.509861 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-utilities\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.510071 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br68t\" (UniqueName: \"kubernetes.io/projected/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-kube-api-access-br68t\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.510317 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-catalog-content\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.511765 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g84bs"] Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.611867 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br68t\" (UniqueName: \"kubernetes.io/projected/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-kube-api-access-br68t\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.611983 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-catalog-content\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.612036 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-utilities\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.612593 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-catalog-content\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.612693 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-utilities\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.635664 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br68t\" (UniqueName: \"kubernetes.io/projected/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-kube-api-access-br68t\") pod \"redhat-operators-g84bs\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:13 crc kubenswrapper[4931]: I0130 05:43:13.774182 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:14 crc kubenswrapper[4931]: I0130 05:43:14.217581 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g84bs"] Jan 30 05:43:14 crc kubenswrapper[4931]: W0130 05:43:14.220844 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d7f649a_ec93_4f68_a9c4_a3f979bd4394.slice/crio-342b5a1e36ce4641c2defe1d001ddf0e69d00817239c0204b0c40243055ce4f6 WatchSource:0}: Error finding container 342b5a1e36ce4641c2defe1d001ddf0e69d00817239c0204b0c40243055ce4f6: Status 404 returned error can't find the container with id 342b5a1e36ce4641c2defe1d001ddf0e69d00817239c0204b0c40243055ce4f6 Jan 30 05:43:14 crc kubenswrapper[4931]: I0130 05:43:14.473222 4931 generic.go:334] "Generic (PLEG): container finished" podID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerID="afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23" exitCode=0 Jan 30 05:43:14 crc kubenswrapper[4931]: I0130 05:43:14.473340 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerDied","Data":"afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23"} Jan 30 05:43:14 crc kubenswrapper[4931]: I0130 05:43:14.473364 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerStarted","Data":"342b5a1e36ce4641c2defe1d001ddf0e69d00817239c0204b0c40243055ce4f6"} Jan 30 05:43:15 crc kubenswrapper[4931]: I0130 05:43:15.499625 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerStarted","Data":"02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a"} Jan 30 05:43:16 crc kubenswrapper[4931]: I0130 05:43:16.510130 4931 generic.go:334] "Generic (PLEG): container finished" podID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerID="02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a" exitCode=0 Jan 30 05:43:16 crc kubenswrapper[4931]: I0130 05:43:16.510176 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerDied","Data":"02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a"} Jan 30 05:43:17 crc kubenswrapper[4931]: I0130 05:43:17.519021 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerStarted","Data":"f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1"} Jan 30 05:43:17 crc kubenswrapper[4931]: I0130 05:43:17.542494 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g84bs" podStartSLOduration=2.042985569 podStartE2EDuration="4.542468068s" podCreationTimestamp="2026-01-30 05:43:13 +0000 UTC" firstStartedPulling="2026-01-30 05:43:14.474620851 +0000 UTC m=+2129.844531108" lastFinishedPulling="2026-01-30 05:43:16.97410335 +0000 UTC m=+2132.344013607" observedRunningTime="2026-01-30 05:43:17.53996217 +0000 UTC m=+2132.909872437" watchObservedRunningTime="2026-01-30 05:43:17.542468068 +0000 UTC m=+2132.912378345" Jan 30 05:43:18 crc 
kubenswrapper[4931]: I0130 05:43:18.238160 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7c2mf"] Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.238841 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7c2mf" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="registry-server" containerID="cri-o://e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6" gracePeriod=2 Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.565102 4931 generic.go:334] "Generic (PLEG): container finished" podID="857de757-e591-4d21-8c09-df06fe672113" containerID="e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6" exitCode=0 Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.567302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerDied","Data":"e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6"} Jan 30 05:43:18 crc kubenswrapper[4931]: E0130 05:43:18.612791 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6 is running failed: container process not found" containerID="e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 05:43:18 crc kubenswrapper[4931]: E0130 05:43:18.620088 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6 is running failed: container process not found" containerID="e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 05:43:18 crc kubenswrapper[4931]: E0130 05:43:18.623629 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6 is running failed: container process not found" containerID="e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 05:43:18 crc kubenswrapper[4931]: E0130 05:43:18.623661 4931 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-7c2mf" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="registry-server" Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.773844 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.912210 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-utilities\") pod \"857de757-e591-4d21-8c09-df06fe672113\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.912316 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-catalog-content\") pod \"857de757-e591-4d21-8c09-df06fe672113\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.912509 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh9fv\" (UniqueName: \"kubernetes.io/projected/857de757-e591-4d21-8c09-df06fe672113-kube-api-access-rh9fv\") pod \"857de757-e591-4d21-8c09-df06fe672113\" (UID: \"857de757-e591-4d21-8c09-df06fe672113\") " Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.915779 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-utilities" (OuterVolumeSpecName: "utilities") pod "857de757-e591-4d21-8c09-df06fe672113" (UID: "857de757-e591-4d21-8c09-df06fe672113"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.920712 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857de757-e591-4d21-8c09-df06fe672113-kube-api-access-rh9fv" (OuterVolumeSpecName: "kube-api-access-rh9fv") pod "857de757-e591-4d21-8c09-df06fe672113" (UID: "857de757-e591-4d21-8c09-df06fe672113"). InnerVolumeSpecName "kube-api-access-rh9fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.975524 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.976349 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:18 crc kubenswrapper[4931]: I0130 05:43:18.988156 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "857de757-e591-4d21-8c09-df06fe672113" (UID: "857de757-e591-4d21-8c09-df06fe672113"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.014359 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh9fv\" (UniqueName: \"kubernetes.io/projected/857de757-e591-4d21-8c09-df06fe672113-kube-api-access-rh9fv\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.014391 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.014403 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/857de757-e591-4d21-8c09-df06fe672113-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.032236 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.584499 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c2mf" event={"ID":"857de757-e591-4d21-8c09-df06fe672113","Type":"ContainerDied","Data":"0fd0f29f39dfc2f8fc36c20641bb3dc4341ba8d334f88c8b701b78ba022b2b94"} Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.584568 4931 scope.go:117] "RemoveContainer" containerID="e12e6d3ec4535e7b05b1464e713c13987b8f0fa47679ea5aab67011140b54df6" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.584562 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c2mf" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.609970 4931 scope.go:117] "RemoveContainer" containerID="b672890edb88cc835c7a676cc899fe42c12ea91373683fb907536ec1d8955df4" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.616021 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7c2mf"] Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.624507 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7c2mf"] Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.633610 4931 scope.go:117] "RemoveContainer" containerID="f6a175af3e413d326c864a9a3c6f8a72ada9a5572ccc2a95480d4ece4789f1a7" Jan 30 05:43:19 crc kubenswrapper[4931]: I0130 05:43:19.648084 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:21 crc kubenswrapper[4931]: I0130 05:43:21.436710 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857de757-e591-4d21-8c09-df06fe672113" path="/var/lib/kubelet/pods/857de757-e591-4d21-8c09-df06fe672113/volumes" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.032608 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdkc7"] Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.033662 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pdkc7" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="registry-server" containerID="cri-o://d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44" gracePeriod=2 Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.505400 4931 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.586957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-utilities\") pod \"e047952b-acbc-4cc4-b175-8a23b1926766\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.587039 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-catalog-content\") pod \"e047952b-acbc-4cc4-b175-8a23b1926766\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.587068 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ktl6\" (UniqueName: \"kubernetes.io/projected/e047952b-acbc-4cc4-b175-8a23b1926766-kube-api-access-6ktl6\") pod \"e047952b-acbc-4cc4-b175-8a23b1926766\" (UID: \"e047952b-acbc-4cc4-b175-8a23b1926766\") " Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.588154 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-utilities" (OuterVolumeSpecName: "utilities") pod "e047952b-acbc-4cc4-b175-8a23b1926766" (UID: "e047952b-acbc-4cc4-b175-8a23b1926766"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.592842 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e047952b-acbc-4cc4-b175-8a23b1926766-kube-api-access-6ktl6" (OuterVolumeSpecName: "kube-api-access-6ktl6") pod "e047952b-acbc-4cc4-b175-8a23b1926766" (UID: "e047952b-acbc-4cc4-b175-8a23b1926766"). InnerVolumeSpecName "kube-api-access-6ktl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.614497 4931 generic.go:334] "Generic (PLEG): container finished" podID="e047952b-acbc-4cc4-b175-8a23b1926766" containerID="d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44" exitCode=0 Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.614541 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerDied","Data":"d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44"} Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.614555 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdkc7" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.614577 4931 scope.go:117] "RemoveContainer" containerID="d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.614567 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdkc7" event={"ID":"e047952b-acbc-4cc4-b175-8a23b1926766","Type":"ContainerDied","Data":"43370026d1dd1ea6668c33c998acaf57c52537d0f5113eb38c1504292cd9a450"} Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.640905 4931 scope.go:117] "RemoveContainer" containerID="0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.652071 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e047952b-acbc-4cc4-b175-8a23b1926766" (UID: "e047952b-acbc-4cc4-b175-8a23b1926766"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.660841 4931 scope.go:117] "RemoveContainer" containerID="c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.689578 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.689659 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e047952b-acbc-4cc4-b175-8a23b1926766-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.689688 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ktl6\" (UniqueName: \"kubernetes.io/projected/e047952b-acbc-4cc4-b175-8a23b1926766-kube-api-access-6ktl6\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.694352 4931 scope.go:117] "RemoveContainer" containerID="d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44" Jan 30 05:43:23 crc kubenswrapper[4931]: E0130 05:43:23.695014 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44\": container with ID starting with d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44 not found: ID does not exist" containerID="d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.695107 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44"} err="failed to get container status \"d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44\": rpc error: code = NotFound desc = could not find container \"d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44\": container with ID starting with d05f22ac35dab00e228e135d0c3cbe5eb830af18c1f8f76ac91f8c4d34517c44 not found: ID does not exist" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.695180 4931 scope.go:117] "RemoveContainer" 
containerID="0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f" Jan 30 05:43:23 crc kubenswrapper[4931]: E0130 05:43:23.695797 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f\": container with ID starting with 0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f not found: ID does not exist" containerID="0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.695858 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f"} err="failed to get container status \"0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f\": rpc error: code = NotFound desc = could not find container \"0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f\": container with ID starting with 0429a78ef978a72eeb5b6f269db910d667fae0bc20b8052dab294c709b1d712f not found: ID does not exist" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.695899 4931 scope.go:117] "RemoveContainer" containerID="c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb" Jan 30 05:43:23 crc kubenswrapper[4931]: E0130 05:43:23.696459 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb\": container with ID starting with c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb not found: ID does not exist" containerID="c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.696508 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb"} err="failed to get container status \"c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb\": rpc error: code = NotFound desc = could not find container \"c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb\": container with ID starting with c71840922cabbd330c9f37cc39b32a1a7643783adad5e2815015183517a80dcb not found: ID does not exist" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.775090 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.775166 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.952302 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdkc7"] Jan 30 05:43:23 crc kubenswrapper[4931]: I0130 05:43:23.964181 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pdkc7"] Jan 30 05:43:24 crc kubenswrapper[4931]: I0130 05:43:24.835607 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g84bs" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="registry-server" probeResult="failure" output=< Jan 30 05:43:24 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 05:43:24 crc kubenswrapper[4931]: > Jan 30 05:43:25 crc kubenswrapper[4931]: 
I0130 05:43:25.438765 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" path="/var/lib/kubelet/pods/e047952b-acbc-4cc4-b175-8a23b1926766/volumes" Jan 30 05:43:27 crc kubenswrapper[4931]: I0130 05:43:27.363144 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:43:27 crc kubenswrapper[4931]: I0130 05:43:27.363207 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:43:33 crc kubenswrapper[4931]: I0130 05:43:33.851008 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:33 crc kubenswrapper[4931]: I0130 05:43:33.920940 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:34 crc kubenswrapper[4931]: I0130 05:43:34.099538 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g84bs"] Jan 30 05:43:35 crc kubenswrapper[4931]: I0130 05:43:35.737009 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g84bs" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="registry-server" containerID="cri-o://f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1" gracePeriod=2 Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.259747 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.394829 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-utilities\") pod \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.394984 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br68t\" (UniqueName: \"kubernetes.io/projected/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-kube-api-access-br68t\") pod \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.395060 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-catalog-content\") pod \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\" (UID: \"4d7f649a-ec93-4f68-a9c4-a3f979bd4394\") " Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.396391 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-utilities" (OuterVolumeSpecName: "utilities") pod "4d7f649a-ec93-4f68-a9c4-a3f979bd4394" (UID: "4d7f649a-ec93-4f68-a9c4-a3f979bd4394"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.405360 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-kube-api-access-br68t" (OuterVolumeSpecName: "kube-api-access-br68t") pod "4d7f649a-ec93-4f68-a9c4-a3f979bd4394" (UID: "4d7f649a-ec93-4f68-a9c4-a3f979bd4394"). InnerVolumeSpecName "kube-api-access-br68t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.498463 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br68t\" (UniqueName: \"kubernetes.io/projected/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-kube-api-access-br68t\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.499731 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.585488 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d7f649a-ec93-4f68-a9c4-a3f979bd4394" (UID: "4d7f649a-ec93-4f68-a9c4-a3f979bd4394"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.601412 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7f649a-ec93-4f68-a9c4-a3f979bd4394-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.747954 4931 generic.go:334] "Generic (PLEG): container finished" podID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerID="f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1" exitCode=0 Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.747996 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerDied","Data":"f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1"} Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.748022 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g84bs" event={"ID":"4d7f649a-ec93-4f68-a9c4-a3f979bd4394","Type":"ContainerDied","Data":"342b5a1e36ce4641c2defe1d001ddf0e69d00817239c0204b0c40243055ce4f6"} Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.748040 4931 scope.go:117] "RemoveContainer" containerID="f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.748068 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g84bs" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.780202 4931 scope.go:117] "RemoveContainer" containerID="02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.816015 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g84bs"] Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.828909 4931 scope.go:117] "RemoveContainer" containerID="afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.830373 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g84bs"] Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.855983 4931 scope.go:117] "RemoveContainer" containerID="f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1" Jan 30 05:43:36 crc kubenswrapper[4931]: E0130 05:43:36.856684 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1\": container with ID starting with f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1 not found: ID does not exist" containerID="f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.856754 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1"} err="failed to get container status \"f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1\": rpc error: code = NotFound desc = could not find container \"f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1\": container with ID starting with f081a7f06933819e7bd133a095a975cc0b7391cf59d391c4be5b65593201afa1 not found: ID does not exist" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.856797 4931 scope.go:117] "RemoveContainer" containerID="02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a" Jan 30 05:43:36 crc kubenswrapper[4931]: E0130 05:43:36.857517 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a\": container with ID starting with 02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a not found: ID does not exist" containerID="02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.857564 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a"} err="failed to get container status \"02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a\": rpc error: code = NotFound desc = could not find container \"02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a\": container with ID starting with 02a8105a0301b9a28a0523b19255e5044e5924cf2163dee95ec23a2ca278682a not found: ID does not exist" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.857632 4931 scope.go:117] "RemoveContainer" containerID="afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23" Jan 30 05:43:36 crc kubenswrapper[4931]: E0130 05:43:36.858147 4931 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23\": container with ID starting with afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23 not found: ID does not exist" containerID="afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23" Jan 30 05:43:36 crc kubenswrapper[4931]: I0130 05:43:36.858201 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23"} err="failed to get container status \"afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23\": rpc error: code = NotFound desc = could not find container \"afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23\": container with ID starting with afb997e59210abc4d4f09acd8d97652f2288e9e4b55bf2612337d1c72bd2ad23 not found: ID does not exist" Jan 30 05:43:37 crc kubenswrapper[4931]: I0130 05:43:37.439501 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" path="/var/lib/kubelet/pods/4d7f649a-ec93-4f68-a9c4-a3f979bd4394/volumes" Jan 30 05:43:57 crc kubenswrapper[4931]: I0130 05:43:57.363413 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:43:57 crc kubenswrapper[4931]: I0130 05:43:57.364698 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:44:27 crc kubenswrapper[4931]: I0130 05:44:27.363618 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:44:27 crc kubenswrapper[4931]: I0130 05:44:27.364300 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:44:27 crc kubenswrapper[4931]: I0130 05:44:27.364368 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:44:27 crc kubenswrapper[4931]: I0130 05:44:27.365310 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:44:27 crc kubenswrapper[4931]: I0130 05:44:27.365471 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" gracePeriod=600 Jan 30 05:44:27 crc kubenswrapper[4931]: E0130 05:44:27.489906 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:44:28 crc kubenswrapper[4931]: I0130 05:44:28.254484 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" exitCode=0 Jan 30 05:44:28 crc kubenswrapper[4931]: I0130 05:44:28.254532 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4"} Jan 30 05:44:28 crc kubenswrapper[4931]: I0130 05:44:28.255520 4931 scope.go:117] "RemoveContainer" containerID="b0bb1a2bae55022b0965d85460d3f0d10d63b5551af2e94575d14b0dc028f44b" Jan 30 05:44:28 crc kubenswrapper[4931]: I0130 05:44:28.256730 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:44:28 crc kubenswrapper[4931]: E0130 05:44:28.257333 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:44:39 crc kubenswrapper[4931]: I0130 05:44:39.422341 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:44:39 crc kubenswrapper[4931]: E0130 05:44:39.423615 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.992702 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kwhpr"] Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993412 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="extract-utilities" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993458 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="extract-utilities" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993487 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857de757-e591-4d21-8c09-df06fe672113" 
containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993500 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993515 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993530 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993558 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="extract-utilities" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993572 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="extract-utilities" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993590 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="extract-content" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993602 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="extract-content" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993621 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="extract-content" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993635 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="extract-content" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993658 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="extract-content" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993670 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="extract-content" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993696 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="extract-utilities" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993709 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="extract-utilities" Jan 30 05:44:41 crc kubenswrapper[4931]: E0130 05:44:41.993731 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993742 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993957 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e047952b-acbc-4cc4-b175-8a23b1926766" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.993990 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="857de757-e591-4d21-8c09-df06fe672113" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.994019 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4d7f649a-ec93-4f68-a9c4-a3f979bd4394" containerName="registry-server" Jan 30 05:44:41 crc kubenswrapper[4931]: I0130 05:44:41.995657 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.014146 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwhpr"] Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.067722 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-catalog-content\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.067873 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-utilities\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.067928 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq9dg\" (UniqueName: \"kubernetes.io/projected/689df455-3e6e-462f-bb80-862257e72f80-kube-api-access-cq9dg\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.171254 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-utilities\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.169388 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-utilities\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.171519 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq9dg\" (UniqueName: \"kubernetes.io/projected/689df455-3e6e-462f-bb80-862257e72f80-kube-api-access-cq9dg\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.172493 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-catalog-content\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.173017 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-catalog-content\") pod \"redhat-marketplace-kwhpr\" (UID: 
\"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.207081 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq9dg\" (UniqueName: \"kubernetes.io/projected/689df455-3e6e-462f-bb80-862257e72f80-kube-api-access-cq9dg\") pod \"redhat-marketplace-kwhpr\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.320898 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:42 crc kubenswrapper[4931]: I0130 05:44:42.598824 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwhpr"] Jan 30 05:44:43 crc kubenswrapper[4931]: I0130 05:44:43.408640 4931 generic.go:334] "Generic (PLEG): container finished" podID="689df455-3e6e-462f-bb80-862257e72f80" containerID="fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff" exitCode=0 Jan 30 05:44:43 crc kubenswrapper[4931]: I0130 05:44:43.410388 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerDied","Data":"fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff"} Jan 30 05:44:43 crc kubenswrapper[4931]: I0130 05:44:43.410511 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerStarted","Data":"167a7f0ace8b81d5e35d65f735e8b0cbec9a1392bc6ad83cb44e5292d753d103"} Jan 30 05:44:44 crc kubenswrapper[4931]: I0130 05:44:44.422598 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerStarted","Data":"51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a"} Jan 30 05:44:45 crc kubenswrapper[4931]: I0130 05:44:45.439947 4931 generic.go:334] "Generic (PLEG): container finished" podID="689df455-3e6e-462f-bb80-862257e72f80" containerID="51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a" exitCode=0 Jan 30 05:44:45 crc kubenswrapper[4931]: I0130 05:44:45.440001 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerDied","Data":"51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a"} Jan 30 05:44:46 crc kubenswrapper[4931]: I0130 05:44:46.455106 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerStarted","Data":"f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6"} Jan 30 05:44:46 crc kubenswrapper[4931]: I0130 05:44:46.488540 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kwhpr" podStartSLOduration=2.812061339 podStartE2EDuration="5.488515281s" podCreationTimestamp="2026-01-30 05:44:41 +0000 UTC" firstStartedPulling="2026-01-30 05:44:43.410902333 +0000 UTC m=+2218.780812630" lastFinishedPulling="2026-01-30 05:44:46.087356275 +0000 UTC m=+2221.457266572" observedRunningTime="2026-01-30 05:44:46.486981829 +0000 UTC m=+2221.856892126" 
watchObservedRunningTime="2026-01-30 05:44:46.488515281 +0000 UTC m=+2221.858425568" Jan 30 05:44:52 crc kubenswrapper[4931]: I0130 05:44:52.322512 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:52 crc kubenswrapper[4931]: I0130 05:44:52.322887 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:52 crc kubenswrapper[4931]: I0130 05:44:52.406944 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:52 crc kubenswrapper[4931]: I0130 05:44:52.581604 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:52 crc kubenswrapper[4931]: I0130 05:44:52.662007 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwhpr"] Jan 30 05:44:54 crc kubenswrapper[4931]: I0130 05:44:54.423005 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:44:54 crc kubenswrapper[4931]: E0130 05:44:54.423505 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:44:54 crc kubenswrapper[4931]: I0130 05:44:54.520510 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kwhpr" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="registry-server" containerID="cri-o://f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6" gracePeriod=2 Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.035307 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.104207 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-catalog-content\") pod \"689df455-3e6e-462f-bb80-862257e72f80\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.104301 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq9dg\" (UniqueName: \"kubernetes.io/projected/689df455-3e6e-462f-bb80-862257e72f80-kube-api-access-cq9dg\") pod \"689df455-3e6e-462f-bb80-862257e72f80\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.104377 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-utilities\") pod \"689df455-3e6e-462f-bb80-862257e72f80\" (UID: \"689df455-3e6e-462f-bb80-862257e72f80\") " Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.106035 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-utilities" (OuterVolumeSpecName: "utilities") pod "689df455-3e6e-462f-bb80-862257e72f80" (UID: "689df455-3e6e-462f-bb80-862257e72f80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.111295 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/689df455-3e6e-462f-bb80-862257e72f80-kube-api-access-cq9dg" (OuterVolumeSpecName: "kube-api-access-cq9dg") pod "689df455-3e6e-462f-bb80-862257e72f80" (UID: "689df455-3e6e-462f-bb80-862257e72f80"). InnerVolumeSpecName "kube-api-access-cq9dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.145570 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "689df455-3e6e-462f-bb80-862257e72f80" (UID: "689df455-3e6e-462f-bb80-862257e72f80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.205574 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.205632 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq9dg\" (UniqueName: \"kubernetes.io/projected/689df455-3e6e-462f-bb80-862257e72f80-kube-api-access-cq9dg\") on node \"crc\" DevicePath \"\"" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.205653 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/689df455-3e6e-462f-bb80-862257e72f80-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.535146 4931 generic.go:334] "Generic (PLEG): container finished" podID="689df455-3e6e-462f-bb80-862257e72f80" containerID="f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6" exitCode=0 Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.535374 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerDied","Data":"f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6"} Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.535490 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwhpr" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.535705 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwhpr" event={"ID":"689df455-3e6e-462f-bb80-862257e72f80","Type":"ContainerDied","Data":"167a7f0ace8b81d5e35d65f735e8b0cbec9a1392bc6ad83cb44e5292d753d103"} Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.535749 4931 scope.go:117] "RemoveContainer" containerID="f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.568717 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwhpr"] Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.585041 4931 scope.go:117] "RemoveContainer" containerID="51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.594387 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwhpr"] Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.620920 4931 scope.go:117] "RemoveContainer" containerID="fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.649612 4931 scope.go:117] "RemoveContainer" containerID="f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6" Jan 30 05:44:55 crc kubenswrapper[4931]: E0130 05:44:55.650274 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6\": container with ID starting with f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6 not found: ID does not exist" containerID="f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.650440 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6"} err="failed to get container status \"f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6\": rpc error: code = NotFound desc = could not find container \"f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6\": container with ID starting with f20934b49aab3b3104e1fd8262b01dfc8e53d096dd53c230a1c774602e8f67e6 not found: ID does not exist" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.650475 4931 scope.go:117] "RemoveContainer" containerID="51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a" Jan 30 05:44:55 crc kubenswrapper[4931]: E0130 05:44:55.651069 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a\": container with ID starting with 51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a not found: ID does not exist" containerID="51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.651109 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a"} err="failed to get container status \"51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a\": rpc error: code = NotFound desc = could not find container \"51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a\": container with ID starting with 51989f346674ebcf63e05a7e391b45094c69a90b5d75791219a49a5e5c5dea5a not found: ID does not exist" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.651141 4931 scope.go:117] "RemoveContainer" containerID="fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff" Jan 30 05:44:55 crc kubenswrapper[4931]: E0130 05:44:55.651708 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff\": container with ID starting with fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff not found: ID does not exist" containerID="fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff" Jan 30 05:44:55 crc kubenswrapper[4931]: I0130 05:44:55.651744 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff"} err="failed to get container status \"fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff\": rpc error: code = NotFound desc = could not find container \"fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff\": container with ID starting with fd23b52f69bb492366f239daeef8b41fa8300afb3a19cda8e307c939758a28ff not found: ID does not exist" Jan 30 05:44:57 crc kubenswrapper[4931]: I0130 05:44:57.437612 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="689df455-3e6e-462f-bb80-862257e72f80" path="/var/lib/kubelet/pods/689df455-3e6e-462f-bb80-862257e72f80/volumes" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.163059 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n"] Jan 30 05:45:00 crc kubenswrapper[4931]: E0130 05:45:00.164166 4931 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="extract-utilities" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.164202 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="extract-utilities" Jan 30 05:45:00 crc kubenswrapper[4931]: E0130 05:45:00.164237 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="registry-server" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.164249 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="registry-server" Jan 30 05:45:00 crc kubenswrapper[4931]: E0130 05:45:00.164300 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="extract-content" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.164316 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="extract-content" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.164645 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="689df455-3e6e-462f-bb80-862257e72f80" containerName="registry-server" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.165621 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.168362 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.172665 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.175024 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n"] Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.286702 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ad7b66-28c5-436b-9dc4-86be3d48787b-secret-volume\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.286817 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpwj\" (UniqueName: \"kubernetes.io/projected/71ad7b66-28c5-436b-9dc4-86be3d48787b-kube-api-access-7hpwj\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.286919 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ad7b66-28c5-436b-9dc4-86be3d48787b-config-volume\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.387893 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7hpwj\" (UniqueName: \"kubernetes.io/projected/71ad7b66-28c5-436b-9dc4-86be3d48787b-kube-api-access-7hpwj\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.388029 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ad7b66-28c5-436b-9dc4-86be3d48787b-config-volume\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.388144 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ad7b66-28c5-436b-9dc4-86be3d48787b-secret-volume\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.389540 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ad7b66-28c5-436b-9dc4-86be3d48787b-config-volume\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.400468 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ad7b66-28c5-436b-9dc4-86be3d48787b-secret-volume\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.406738 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpwj\" (UniqueName: \"kubernetes.io/projected/71ad7b66-28c5-436b-9dc4-86be3d48787b-kube-api-access-7hpwj\") pod \"collect-profiles-29495865-q2t6n\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.499842 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:00 crc kubenswrapper[4931]: I0130 05:45:00.996270 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n"] Jan 30 05:45:01 crc kubenswrapper[4931]: I0130 05:45:01.586818 4931 generic.go:334] "Generic (PLEG): container finished" podID="71ad7b66-28c5-436b-9dc4-86be3d48787b" containerID="0262628a4935b4dc10f986d98e7493ff62eab4841805fce6eb8783a9ef5f62e3" exitCode=0 Jan 30 05:45:01 crc kubenswrapper[4931]: I0130 05:45:01.586947 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" event={"ID":"71ad7b66-28c5-436b-9dc4-86be3d48787b","Type":"ContainerDied","Data":"0262628a4935b4dc10f986d98e7493ff62eab4841805fce6eb8783a9ef5f62e3"} Jan 30 05:45:01 crc kubenswrapper[4931]: I0130 05:45:01.588842 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" event={"ID":"71ad7b66-28c5-436b-9dc4-86be3d48787b","Type":"ContainerStarted","Data":"03eafa58de30069d7ac5ebf94066c455d3671932c3763fd86e4f19f69f3d9e4c"} Jan 30 05:45:02 crc kubenswrapper[4931]: I0130 05:45:02.966868 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.029596 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ad7b66-28c5-436b-9dc4-86be3d48787b-secret-volume\") pod \"71ad7b66-28c5-436b-9dc4-86be3d48787b\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.029690 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ad7b66-28c5-436b-9dc4-86be3d48787b-config-volume\") pod \"71ad7b66-28c5-436b-9dc4-86be3d48787b\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.029751 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpwj\" (UniqueName: \"kubernetes.io/projected/71ad7b66-28c5-436b-9dc4-86be3d48787b-kube-api-access-7hpwj\") pod \"71ad7b66-28c5-436b-9dc4-86be3d48787b\" (UID: \"71ad7b66-28c5-436b-9dc4-86be3d48787b\") " Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.032900 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ad7b66-28c5-436b-9dc4-86be3d48787b-config-volume" (OuterVolumeSpecName: "config-volume") pod "71ad7b66-28c5-436b-9dc4-86be3d48787b" (UID: "71ad7b66-28c5-436b-9dc4-86be3d48787b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.037996 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ad7b66-28c5-436b-9dc4-86be3d48787b-kube-api-access-7hpwj" (OuterVolumeSpecName: "kube-api-access-7hpwj") pod "71ad7b66-28c5-436b-9dc4-86be3d48787b" (UID: "71ad7b66-28c5-436b-9dc4-86be3d48787b"). InnerVolumeSpecName "kube-api-access-7hpwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.039980 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ad7b66-28c5-436b-9dc4-86be3d48787b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "71ad7b66-28c5-436b-9dc4-86be3d48787b" (UID: "71ad7b66-28c5-436b-9dc4-86be3d48787b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.132098 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ad7b66-28c5-436b-9dc4-86be3d48787b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.132144 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ad7b66-28c5-436b-9dc4-86be3d48787b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.132164 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hpwj\" (UniqueName: \"kubernetes.io/projected/71ad7b66-28c5-436b-9dc4-86be3d48787b-kube-api-access-7hpwj\") on node \"crc\" DevicePath \"\"" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.608351 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" event={"ID":"71ad7b66-28c5-436b-9dc4-86be3d48787b","Type":"ContainerDied","Data":"03eafa58de30069d7ac5ebf94066c455d3671932c3763fd86e4f19f69f3d9e4c"} Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.608710 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03eafa58de30069d7ac5ebf94066c455d3671932c3763fd86e4f19f69f3d9e4c" Jan 30 05:45:03 crc kubenswrapper[4931]: I0130 05:45:03.608491 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n" Jan 30 05:45:04 crc kubenswrapper[4931]: I0130 05:45:04.087099 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g"] Jan 30 05:45:04 crc kubenswrapper[4931]: I0130 05:45:04.096382 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495820-5cp8g"] Jan 30 05:45:05 crc kubenswrapper[4931]: I0130 05:45:05.442028 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:45:05 crc kubenswrapper[4931]: E0130 05:45:05.442579 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:45:05 crc kubenswrapper[4931]: I0130 05:45:05.460844 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a8f99a6-f163-4720-8eb4-bc8607753d79" path="/var/lib/kubelet/pods/1a8f99a6-f163-4720-8eb4-bc8607753d79/volumes" Jan 30 05:45:18 crc kubenswrapper[4931]: I0130 05:45:18.421922 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:45:18 crc kubenswrapper[4931]: E0130 05:45:18.422873 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:45:29 crc kubenswrapper[4931]: I0130 05:45:29.423066 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:45:29 crc kubenswrapper[4931]: E0130 05:45:29.424362 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:45:41 crc kubenswrapper[4931]: I0130 05:45:41.422840 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:45:41 crc kubenswrapper[4931]: E0130 05:45:41.423912 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:45:52 crc kubenswrapper[4931]: I0130 05:45:52.206469 4931 scope.go:117] "RemoveContainer" 
containerID="76f686a64b7bcba52e9cf572d78b41631a5873f435dbdf098126fe32ac5ccc3f" Jan 30 05:45:53 crc kubenswrapper[4931]: I0130 05:45:53.422217 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:45:53 crc kubenswrapper[4931]: E0130 05:45:53.422773 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:46:07 crc kubenswrapper[4931]: I0130 05:46:07.422925 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:46:07 crc kubenswrapper[4931]: E0130 05:46:07.424275 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:46:22 crc kubenswrapper[4931]: I0130 05:46:22.421891 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:46:22 crc kubenswrapper[4931]: E0130 05:46:22.423016 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:46:34 crc kubenswrapper[4931]: I0130 05:46:34.422856 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:46:34 crc kubenswrapper[4931]: E0130 05:46:34.423877 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:46:47 crc kubenswrapper[4931]: I0130 05:46:47.422220 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:46:47 crc kubenswrapper[4931]: E0130 05:46:47.423270 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:47:02 crc kubenswrapper[4931]: I0130 05:47:02.421622 4931 scope.go:117] "RemoveContainer" 
containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:47:02 crc kubenswrapper[4931]: E0130 05:47:02.422246 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:47:15 crc kubenswrapper[4931]: I0130 05:47:15.429283 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:47:15 crc kubenswrapper[4931]: E0130 05:47:15.430717 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:47:30 crc kubenswrapper[4931]: I0130 05:47:30.422743 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:47:30 crc kubenswrapper[4931]: E0130 05:47:30.423453 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:47:45 crc kubenswrapper[4931]: I0130 05:47:45.434535 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:47:45 crc kubenswrapper[4931]: E0130 05:47:45.436314 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:47:58 crc kubenswrapper[4931]: I0130 05:47:58.437235 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:47:58 crc kubenswrapper[4931]: E0130 05:47:58.438473 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:48:12 crc kubenswrapper[4931]: I0130 05:48:12.422400 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:48:12 crc kubenswrapper[4931]: E0130 05:48:12.423598 4931 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:48:25 crc kubenswrapper[4931]: I0130 05:48:25.429987 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:48:25 crc kubenswrapper[4931]: E0130 05:48:25.431259 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:48:39 crc kubenswrapper[4931]: I0130 05:48:39.421920 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:48:39 crc kubenswrapper[4931]: E0130 05:48:39.423008 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:48:52 crc kubenswrapper[4931]: I0130 05:48:52.422795 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:48:52 crc kubenswrapper[4931]: E0130 05:48:52.423826 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:49:03 crc kubenswrapper[4931]: I0130 05:49:03.423143 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:49:03 crc kubenswrapper[4931]: E0130 05:49:03.424109 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:49:16 crc kubenswrapper[4931]: I0130 05:49:16.422090 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:49:16 crc kubenswrapper[4931]: E0130 05:49:16.422921 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:49:31 crc kubenswrapper[4931]: I0130 05:49:31.422635 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:49:32 crc kubenswrapper[4931]: I0130 05:49:32.183601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"3bffa97502e128e6feab5bb4faa23ada0376dcd29bfbd235484da7266eed26ff"} Jan 30 05:51:57 crc kubenswrapper[4931]: I0130 05:51:57.362974 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:51:57 crc kubenswrapper[4931]: I0130 05:51:57.364606 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:52:27 crc kubenswrapper[4931]: I0130 05:52:27.363873 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:52:27 crc kubenswrapper[4931]: I0130 05:52:27.364577 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:52:57 crc kubenswrapper[4931]: I0130 05:52:57.363183 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 05:52:57 crc kubenswrapper[4931]: I0130 05:52:57.363782 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 05:52:57 crc kubenswrapper[4931]: I0130 05:52:57.363838 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:52:57 crc kubenswrapper[4931]: I0130 05:52:57.364529 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bffa97502e128e6feab5bb4faa23ada0376dcd29bfbd235484da7266eed26ff"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:52:57 crc kubenswrapper[4931]: I0130 05:52:57.364605 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://3bffa97502e128e6feab5bb4faa23ada0376dcd29bfbd235484da7266eed26ff" gracePeriod=600 Jan 30 05:52:58 crc kubenswrapper[4931]: I0130 05:52:58.137506 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="3bffa97502e128e6feab5bb4faa23ada0376dcd29bfbd235484da7266eed26ff" exitCode=0 Jan 30 05:52:58 crc kubenswrapper[4931]: I0130 05:52:58.137614 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"3bffa97502e128e6feab5bb4faa23ada0376dcd29bfbd235484da7266eed26ff"} Jan 30 05:52:58 crc kubenswrapper[4931]: I0130 05:52:58.137885 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb"} Jan 30 05:52:58 crc kubenswrapper[4931]: I0130 05:52:58.137910 4931 scope.go:117] "RemoveContainer" containerID="e584eab46a61f7359d189dfe47f467606f0c8ed3cbb55e61552452b295cc93b4" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.702335 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5fj9"] Jan 30 05:53:08 crc kubenswrapper[4931]: E0130 05:53:08.703718 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ad7b66-28c5-436b-9dc4-86be3d48787b" containerName="collect-profiles" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.703742 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ad7b66-28c5-436b-9dc4-86be3d48787b" containerName="collect-profiles" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.703991 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ad7b66-28c5-436b-9dc4-86be3d48787b" containerName="collect-profiles" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.705696 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.730593 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5fj9"] Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.852688 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt87w\" (UniqueName: \"kubernetes.io/projected/c5cc4dba-5433-4509-bf60-d080a781977b-kube-api-access-pt87w\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.852824 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-catalog-content\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.852971 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-utilities\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.954108 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt87w\" (UniqueName: \"kubernetes.io/projected/c5cc4dba-5433-4509-bf60-d080a781977b-kube-api-access-pt87w\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.954165 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-catalog-content\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.954257 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-utilities\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.954783 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-utilities\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.954832 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-catalog-content\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:08 crc kubenswrapper[4931]: I0130 05:53:08.988384 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pt87w\" (UniqueName: \"kubernetes.io/projected/c5cc4dba-5433-4509-bf60-d080a781977b-kube-api-access-pt87w\") pod \"community-operators-j5fj9\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.039262 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.268284 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v7p6c"] Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.270808 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.280598 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7p6c"] Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.359560 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-catalog-content\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.359618 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-utilities\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.359648 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xvf\" (UniqueName: \"kubernetes.io/projected/962ef2ba-af31-4ef4-a699-dd69242ec082-kube-api-access-h9xvf\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.461087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-catalog-content\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.461143 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-utilities\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.461175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9xvf\" (UniqueName: \"kubernetes.io/projected/962ef2ba-af31-4ef4-a699-dd69242ec082-kube-api-access-h9xvf\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.461675 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-catalog-content\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.461813 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-utilities\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.479254 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9xvf\" (UniqueName: \"kubernetes.io/projected/962ef2ba-af31-4ef4-a699-dd69242ec082-kube-api-access-h9xvf\") pod \"certified-operators-v7p6c\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.540714 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5fj9"] Jan 30 05:53:09 crc kubenswrapper[4931]: I0130 05:53:09.595709 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.056421 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7p6c"] Jan 30 05:53:10 crc kubenswrapper[4931]: W0130 05:53:10.061531 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod962ef2ba_af31_4ef4_a699_dd69242ec082.slice/crio-480315bed4d6d9f6b661589af2e73b383826b6ce68cbba795e0ca02d6721c48e WatchSource:0}: Error finding container 480315bed4d6d9f6b661589af2e73b383826b6ce68cbba795e0ca02d6721c48e: Status 404 returned error can't find the container with id 480315bed4d6d9f6b661589af2e73b383826b6ce68cbba795e0ca02d6721c48e Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.242539 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerStarted","Data":"fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc"} Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.242580 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerStarted","Data":"480315bed4d6d9f6b661589af2e73b383826b6ce68cbba795e0ca02d6721c48e"} Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.251383 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5cc4dba-5433-4509-bf60-d080a781977b" containerID="c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266" exitCode=0 Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.251438 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerDied","Data":"c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266"} Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.251461 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerStarted","Data":"38a774b986b60a9f602dd779e9c808da57377b8f459ce6e5789f360ff4050322"} Jan 30 05:53:10 crc kubenswrapper[4931]: I0130 05:53:10.253689 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 05:53:11 crc kubenswrapper[4931]: I0130 05:53:11.264463 4931 generic.go:334] "Generic (PLEG): container finished" podID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerID="fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc" exitCode=0 Jan 30 05:53:11 crc kubenswrapper[4931]: I0130 05:53:11.264564 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerDied","Data":"fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc"} Jan 30 05:53:11 crc kubenswrapper[4931]: I0130 05:53:11.270735 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerStarted","Data":"aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1"} Jan 30 05:53:12 crc kubenswrapper[4931]: I0130 05:53:12.283320 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerStarted","Data":"fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852"} Jan 30 05:53:12 crc kubenswrapper[4931]: I0130 05:53:12.288140 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5cc4dba-5433-4509-bf60-d080a781977b" containerID="aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1" exitCode=0 Jan 30 05:53:12 crc kubenswrapper[4931]: I0130 05:53:12.288195 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerDied","Data":"aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1"} Jan 30 05:53:13 crc kubenswrapper[4931]: I0130 05:53:13.300715 4931 generic.go:334] "Generic (PLEG): container finished" podID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerID="fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852" exitCode=0 Jan 30 05:53:13 crc kubenswrapper[4931]: I0130 05:53:13.300771 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerDied","Data":"fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852"} Jan 30 05:53:13 crc kubenswrapper[4931]: I0130 05:53:13.304927 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerStarted","Data":"e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886"} Jan 30 05:53:13 crc kubenswrapper[4931]: I0130 05:53:13.377109 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j5fj9" podStartSLOduration=2.932372466 podStartE2EDuration="5.377084499s" podCreationTimestamp="2026-01-30 05:53:08 +0000 UTC" firstStartedPulling="2026-01-30 05:53:10.253498883 +0000 UTC m=+2725.623409140" lastFinishedPulling="2026-01-30 05:53:12.698210886 +0000 UTC m=+2728.068121173" 
observedRunningTime="2026-01-30 05:53:13.360851607 +0000 UTC m=+2728.730761894" watchObservedRunningTime="2026-01-30 05:53:13.377084499 +0000 UTC m=+2728.746994796" Jan 30 05:53:14 crc kubenswrapper[4931]: I0130 05:53:14.315521 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerStarted","Data":"98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7"} Jan 30 05:53:14 crc kubenswrapper[4931]: I0130 05:53:14.344754 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v7p6c" podStartSLOduration=2.927179895 podStartE2EDuration="5.344736692s" podCreationTimestamp="2026-01-30 05:53:09 +0000 UTC" firstStartedPulling="2026-01-30 05:53:11.267573079 +0000 UTC m=+2726.637483386" lastFinishedPulling="2026-01-30 05:53:13.685129916 +0000 UTC m=+2729.055040183" observedRunningTime="2026-01-30 05:53:14.340409821 +0000 UTC m=+2729.710320068" watchObservedRunningTime="2026-01-30 05:53:14.344736692 +0000 UTC m=+2729.714646949" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.040216 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.041035 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.104452 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.435572 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.597267 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.597358 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.646500 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:19 crc kubenswrapper[4931]: I0130 05:53:19.659040 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5fj9"] Jan 30 05:53:20 crc kubenswrapper[4931]: I0130 05:53:20.422980 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:21 crc kubenswrapper[4931]: I0130 05:53:21.376189 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j5fj9" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="registry-server" containerID="cri-o://e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886" gracePeriod=2 Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.044853 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.073691 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7p6c"] Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.158398 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-utilities\") pod \"c5cc4dba-5433-4509-bf60-d080a781977b\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.158549 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt87w\" (UniqueName: \"kubernetes.io/projected/c5cc4dba-5433-4509-bf60-d080a781977b-kube-api-access-pt87w\") pod \"c5cc4dba-5433-4509-bf60-d080a781977b\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.158697 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-catalog-content\") pod \"c5cc4dba-5433-4509-bf60-d080a781977b\" (UID: \"c5cc4dba-5433-4509-bf60-d080a781977b\") " Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.160275 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-utilities" (OuterVolumeSpecName: "utilities") pod "c5cc4dba-5433-4509-bf60-d080a781977b" (UID: "c5cc4dba-5433-4509-bf60-d080a781977b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.167013 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5cc4dba-5433-4509-bf60-d080a781977b-kube-api-access-pt87w" (OuterVolumeSpecName: "kube-api-access-pt87w") pod "c5cc4dba-5433-4509-bf60-d080a781977b" (UID: "c5cc4dba-5433-4509-bf60-d080a781977b"). InnerVolumeSpecName "kube-api-access-pt87w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.243922 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5cc4dba-5433-4509-bf60-d080a781977b" (UID: "c5cc4dba-5433-4509-bf60-d080a781977b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.261060 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.261108 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt87w\" (UniqueName: \"kubernetes.io/projected/c5cc4dba-5433-4509-bf60-d080a781977b-kube-api-access-pt87w\") on node \"crc\" DevicePath \"\"" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.261129 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5cc4dba-5433-4509-bf60-d080a781977b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.391114 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5cc4dba-5433-4509-bf60-d080a781977b" containerID="e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886" exitCode=0 Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.391200 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5fj9" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.391204 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerDied","Data":"e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886"} Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.391390 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5fj9" event={"ID":"c5cc4dba-5433-4509-bf60-d080a781977b","Type":"ContainerDied","Data":"38a774b986b60a9f602dd779e9c808da57377b8f459ce6e5789f360ff4050322"} Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.391456 4931 scope.go:117] "RemoveContainer" containerID="e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.392151 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v7p6c" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="registry-server" containerID="cri-o://98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7" gracePeriod=2 Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.441764 4931 scope.go:117] "RemoveContainer" containerID="aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.451732 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5fj9"] Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.463447 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j5fj9"] Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.465670 4931 scope.go:117] "RemoveContainer" containerID="c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.585452 4931 scope.go:117] "RemoveContainer" containerID="e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886" Jan 30 05:53:22 crc kubenswrapper[4931]: E0130 05:53:22.586011 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886\": container with ID starting with e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886 not found: ID does not exist" containerID="e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.586052 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886"} err="failed to get container status \"e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886\": rpc error: code = NotFound desc = could not find container \"e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886\": container with ID starting with e3eeafbbfced90de8534642c23434c1e4d70db646b394266f1065f08c747a886 not found: ID does not exist" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.586077 4931 scope.go:117] "RemoveContainer" containerID="aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1" Jan 30 05:53:22 crc kubenswrapper[4931]: E0130 05:53:22.586711 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1\": container with ID starting with aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1 not found: ID does not exist" containerID="aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.586774 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1"} err="failed to get container status \"aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1\": rpc error: code = NotFound desc = could not find container \"aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1\": container with ID starting with aeb9770910e3b9f6716b8f9e0244c9ac9f49c525029df601a9cad4898f3628c1 not found: ID does not exist" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.586816 4931 scope.go:117] "RemoveContainer" containerID="c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266" Jan 30 05:53:22 crc kubenswrapper[4931]: E0130 05:53:22.587153 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266\": container with ID starting with c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266 not found: ID does not exist" containerID="c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.587184 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266"} err="failed to get container status \"c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266\": rpc error: code = NotFound desc = could not find container \"c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266\": container with ID starting with c3c52c593d718f3b446da1c8d4b7a0369c62a74499d34ed1819c126fb7db7266 not found: ID does not exist" Jan 30 05:53:22 crc kubenswrapper[4931]: I0130 05:53:22.973378 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.086579 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-catalog-content\") pod \"962ef2ba-af31-4ef4-a699-dd69242ec082\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.086681 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xvf\" (UniqueName: \"kubernetes.io/projected/962ef2ba-af31-4ef4-a699-dd69242ec082-kube-api-access-h9xvf\") pod \"962ef2ba-af31-4ef4-a699-dd69242ec082\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.086727 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-utilities\") pod \"962ef2ba-af31-4ef4-a699-dd69242ec082\" (UID: \"962ef2ba-af31-4ef4-a699-dd69242ec082\") " Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.087687 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-utilities" (OuterVolumeSpecName: "utilities") pod "962ef2ba-af31-4ef4-a699-dd69242ec082" (UID: "962ef2ba-af31-4ef4-a699-dd69242ec082"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.090604 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962ef2ba-af31-4ef4-a699-dd69242ec082-kube-api-access-h9xvf" (OuterVolumeSpecName: "kube-api-access-h9xvf") pod "962ef2ba-af31-4ef4-a699-dd69242ec082" (UID: "962ef2ba-af31-4ef4-a699-dd69242ec082"). InnerVolumeSpecName "kube-api-access-h9xvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.145085 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "962ef2ba-af31-4ef4-a699-dd69242ec082" (UID: "962ef2ba-af31-4ef4-a699-dd69242ec082"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.188948 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.189013 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9xvf\" (UniqueName: \"kubernetes.io/projected/962ef2ba-af31-4ef4-a699-dd69242ec082-kube-api-access-h9xvf\") on node \"crc\" DevicePath \"\"" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.189038 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/962ef2ba-af31-4ef4-a699-dd69242ec082-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.404616 4931 generic.go:334] "Generic (PLEG): container finished" podID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerID="98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7" exitCode=0 Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.404683 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerDied","Data":"98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7"} Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.404744 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7p6c" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.405156 4931 scope.go:117] "RemoveContainer" containerID="98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.405132 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7p6c" event={"ID":"962ef2ba-af31-4ef4-a699-dd69242ec082","Type":"ContainerDied","Data":"480315bed4d6d9f6b661589af2e73b383826b6ce68cbba795e0ca02d6721c48e"} Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.437248 4931 scope.go:117] "RemoveContainer" containerID="fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.442197 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" path="/var/lib/kubelet/pods/c5cc4dba-5433-4509-bf60-d080a781977b/volumes" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.470603 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v7p6c"] Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.481664 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v7p6c"] Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.491911 4931 scope.go:117] "RemoveContainer" containerID="fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc" Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.515729 4931 scope.go:117] "RemoveContainer" containerID="98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7" Jan 30 05:53:23 crc kubenswrapper[4931]: E0130 05:53:23.516824 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7\": container with ID 
Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.516863 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7"} err="failed to get container status \"98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7\": rpc error: code = NotFound desc = could not find container \"98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7\": container with ID starting with 98a4f16a2d3e2b5814fd2d6e8656d4f876ee224f408b34699b7569c875dbc6f7 not found: ID does not exist"
Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.516889 4931 scope.go:117] "RemoveContainer" containerID="fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852"
Jan 30 05:53:23 crc kubenswrapper[4931]: E0130 05:53:23.517733 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852\": container with ID starting with fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852 not found: ID does not exist" containerID="fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852"
Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.517886 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852"} err="failed to get container status \"fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852\": rpc error: code = NotFound desc = could not find container \"fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852\": container with ID starting with fa57e4401c4a867d3d80c1c7b2e0c5a34a1770e292a6ee56501504b6b128a852 not found: ID does not exist"
Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.517958 4931 scope.go:117] "RemoveContainer" containerID="fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc"
Jan 30 05:53:23 crc kubenswrapper[4931]: E0130 05:53:23.518869 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc\": container with ID starting with fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc not found: ID does not exist" containerID="fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc"
Jan 30 05:53:23 crc kubenswrapper[4931]: I0130 05:53:23.518919 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc"} err="failed to get container status \"fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc\": rpc error: code = NotFound desc = could not find container \"fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc\": container with ID starting with fc423a2426262ce36e49f1537c46892fde45a02e00935ca420e4803025a86ebc not found: ID does not exist"
Jan 30 05:53:25 crc kubenswrapper[4931]: I0130 05:53:25.439234 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" path="/var/lib/kubelet/pods/962ef2ba-af31-4ef4-a699-dd69242ec082/volumes"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.116272 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xxts8"]
Jan 30 05:54:23 crc kubenswrapper[4931]: E0130 05:54:23.117451 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="extract-content"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117474 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="extract-content"
Jan 30 05:54:23 crc kubenswrapper[4931]: E0130 05:54:23.117491 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="registry-server"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117501 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="registry-server"
Jan 30 05:54:23 crc kubenswrapper[4931]: E0130 05:54:23.117529 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="extract-content"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117540 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="extract-content"
Jan 30 05:54:23 crc kubenswrapper[4931]: E0130 05:54:23.117565 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="extract-utilities"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117575 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="extract-utilities"
Jan 30 05:54:23 crc kubenswrapper[4931]: E0130 05:54:23.117596 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="extract-utilities"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117606 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="extract-utilities"
Jan 30 05:54:23 crc kubenswrapper[4931]: E0130 05:54:23.117628 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="registry-server"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117640 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="registry-server"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117832 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5cc4dba-5433-4509-bf60-d080a781977b" containerName="registry-server"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.117876 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="962ef2ba-af31-4ef4-a699-dd69242ec082" containerName="registry-server"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.119366 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.137483 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxts8"]
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.296648 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-utilities\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.296700 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-catalog-content\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.296812 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t27j\" (UniqueName: \"kubernetes.io/projected/64eb9b0a-7a6b-479c-93ee-118642bac30f-kube-api-access-7t27j\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.398408 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t27j\" (UniqueName: \"kubernetes.io/projected/64eb9b0a-7a6b-479c-93ee-118642bac30f-kube-api-access-7t27j\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.398531 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-utilities\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.398569 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-catalog-content\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.398983 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-utilities\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.399054 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-catalog-content\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.419168 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t27j\" (UniqueName: \"kubernetes.io/projected/64eb9b0a-7a6b-479c-93ee-118642bac30f-kube-api-access-7t27j\") pod \"redhat-operators-xxts8\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") " pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.438684 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.904280 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxts8"]
Jan 30 05:54:23 crc kubenswrapper[4931]: I0130 05:54:23.985528 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerStarted","Data":"5e48aaf71e51dd10317bd589006ab382d0e5ecebbc057cd2b565b569c9c29556"}
Jan 30 05:54:24 crc kubenswrapper[4931]: I0130 05:54:24.997043 4931 generic.go:334] "Generic (PLEG): container finished" podID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerID="69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3" exitCode=0
Jan 30 05:54:24 crc kubenswrapper[4931]: I0130 05:54:24.997235 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerDied","Data":"69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3"}
Jan 30 05:54:26 crc kubenswrapper[4931]: I0130 05:54:26.011914 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerStarted","Data":"3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4"}
Jan 30 05:54:27 crc kubenswrapper[4931]: I0130 05:54:27.024539 4931 generic.go:334] "Generic (PLEG): container finished" podID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerID="3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4" exitCode=0
Jan 30 05:54:27 crc kubenswrapper[4931]: I0130 05:54:27.024609 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerDied","Data":"3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4"}
Jan 30 05:54:28 crc kubenswrapper[4931]: I0130 05:54:28.039152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerStarted","Data":"962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf"}
Jan 30 05:54:33 crc kubenswrapper[4931]: I0130 05:54:33.439345 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:33 crc kubenswrapper[4931]: I0130 05:54:33.441142 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:34 crc kubenswrapper[4931]: I0130 05:54:34.504627 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xxts8" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="registry-server" probeResult="failure" output=<
Jan 30 05:54:34 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s
Jan 30 05:54:34 crc kubenswrapper[4931]: >
Jan 30 05:54:43 crc kubenswrapper[4931]: I0130 05:54:43.507126 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:43 crc kubenswrapper[4931]: I0130 05:54:43.547044 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xxts8" podStartSLOduration=17.953757635 podStartE2EDuration="20.547016933s" podCreationTimestamp="2026-01-30 05:54:23 +0000 UTC" firstStartedPulling="2026-01-30 05:54:25.000605722 +0000 UTC m=+2800.370516019" lastFinishedPulling="2026-01-30 05:54:27.59386503 +0000 UTC m=+2802.963775317" observedRunningTime="2026-01-30 05:54:28.073866385 +0000 UTC m=+2803.443776672" watchObservedRunningTime="2026-01-30 05:54:43.547016933 +0000 UTC m=+2818.916927220"
Jan 30 05:54:43 crc kubenswrapper[4931]: I0130 05:54:43.590907 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:43 crc kubenswrapper[4931]: I0130 05:54:43.756084 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxts8"]
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.198294 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xxts8" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="registry-server" containerID="cri-o://962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf" gracePeriod=2
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.662841 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.780309 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-utilities\") pod \"64eb9b0a-7a6b-479c-93ee-118642bac30f\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") "
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.780381 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-catalog-content\") pod \"64eb9b0a-7a6b-479c-93ee-118642bac30f\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") "
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.780587 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t27j\" (UniqueName: \"kubernetes.io/projected/64eb9b0a-7a6b-479c-93ee-118642bac30f-kube-api-access-7t27j\") pod \"64eb9b0a-7a6b-479c-93ee-118642bac30f\" (UID: \"64eb9b0a-7a6b-479c-93ee-118642bac30f\") "
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.781248 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-utilities" (OuterVolumeSpecName: "utilities") pod "64eb9b0a-7a6b-479c-93ee-118642bac30f" (UID: "64eb9b0a-7a6b-479c-93ee-118642bac30f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.786903 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64eb9b0a-7a6b-479c-93ee-118642bac30f-kube-api-access-7t27j" (OuterVolumeSpecName: "kube-api-access-7t27j") pod "64eb9b0a-7a6b-479c-93ee-118642bac30f" (UID: "64eb9b0a-7a6b-479c-93ee-118642bac30f"). InnerVolumeSpecName "kube-api-access-7t27j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.882701 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t27j\" (UniqueName: \"kubernetes.io/projected/64eb9b0a-7a6b-479c-93ee-118642bac30f-kube-api-access-7t27j\") on node \"crc\" DevicePath \"\""
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.882757 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.941964 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64eb9b0a-7a6b-479c-93ee-118642bac30f" (UID: "64eb9b0a-7a6b-479c-93ee-118642bac30f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 05:54:45 crc kubenswrapper[4931]: I0130 05:54:45.984222 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64eb9b0a-7a6b-479c-93ee-118642bac30f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.212594 4931 generic.go:334] "Generic (PLEG): container finished" podID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerID="962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf" exitCode=0
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.212654 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerDied","Data":"962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf"}
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.212695 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxts8" event={"ID":"64eb9b0a-7a6b-479c-93ee-118642bac30f","Type":"ContainerDied","Data":"5e48aaf71e51dd10317bd589006ab382d0e5ecebbc057cd2b565b569c9c29556"}
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.212723 4931 scope.go:117] "RemoveContainer" containerID="962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.212733 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxts8"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.247869 4931 scope.go:117] "RemoveContainer" containerID="3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.274282 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxts8"]
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.284396 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xxts8"]
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.301687 4931 scope.go:117] "RemoveContainer" containerID="69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.321866 4931 scope.go:117] "RemoveContainer" containerID="962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf"
Jan 30 05:54:46 crc kubenswrapper[4931]: E0130 05:54:46.322273 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf\": container with ID starting with 962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf not found: ID does not exist" containerID="962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.322334 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf"} err="failed to get container status \"962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf\": rpc error: code = NotFound desc = could not find container \"962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf\": container with ID starting with 962ef63dffb4cd6b5ff3880e0f6450256c1a450a0d3c3696bc5cd38f94eac1bf not found: ID does not exist"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.322371 4931 scope.go:117] "RemoveContainer" containerID="3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4"
Jan 30 05:54:46 crc kubenswrapper[4931]: E0130 05:54:46.323042 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4\": container with ID starting with 3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4 not found: ID does not exist" containerID="3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.323089 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4"} err="failed to get container status \"3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4\": rpc error: code = NotFound desc = could not find container \"3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4\": container with ID starting with 3ab82937c12f6fda05938c880ad10d71218abf1a8f1d6775a183f50c6d7078e4 not found: ID does not exist"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.323118 4931 scope.go:117] "RemoveContainer" containerID="69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3"
Jan 30 05:54:46 crc kubenswrapper[4931]: E0130 05:54:46.323687 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3\": container with ID starting with 69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3 not found: ID does not exist" containerID="69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3"
Jan 30 05:54:46 crc kubenswrapper[4931]: I0130 05:54:46.323741 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3"} err="failed to get container status \"69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3\": rpc error: code = NotFound desc = could not find container \"69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3\": container with ID starting with 69934b78c3a186dd3ec212cfc8d944537b3fcdbd318da93763df4882959443d3 not found: ID does not exist"
Jan 30 05:54:47 crc kubenswrapper[4931]: I0130 05:54:47.437297 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" path="/var/lib/kubelet/pods/64eb9b0a-7a6b-479c-93ee-118642bac30f/volumes"
Jan 30 05:54:57 crc kubenswrapper[4931]: I0130 05:54:57.362904 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 05:54:57 crc kubenswrapper[4931]: I0130 05:54:57.363506 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 05:55:27 crc kubenswrapper[4931]: I0130 05:55:27.362966 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 05:55:27 crc kubenswrapper[4931]: I0130 05:55:27.363630 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.362990 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.363680 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.363747 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs"
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.364547 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.364641 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" gracePeriod=600 Jan 30 05:55:57 crc kubenswrapper[4931]: E0130 05:55:57.503368 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.873469 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" exitCode=0 Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.873542 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb"} Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.873958 4931 scope.go:117] "RemoveContainer" containerID="3bffa97502e128e6feab5bb4faa23ada0376dcd29bfbd235484da7266eed26ff" Jan 30 05:55:57 crc kubenswrapper[4931]: I0130 05:55:57.874531 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:55:57 crc kubenswrapper[4931]: E0130 05:55:57.874866 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:56:11 crc kubenswrapper[4931]: I0130 05:56:11.423975 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:56:11 crc kubenswrapper[4931]: E0130 05:56:11.425707 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:56:25 crc 
kubenswrapper[4931]: I0130 05:56:25.431017 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:56:25 crc kubenswrapper[4931]: E0130 05:56:25.432291 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:56:36 crc kubenswrapper[4931]: I0130 05:56:36.422316 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:56:36 crc kubenswrapper[4931]: E0130 05:56:36.423321 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:56:51 crc kubenswrapper[4931]: I0130 05:56:51.423704 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:56:51 crc kubenswrapper[4931]: E0130 05:56:51.425036 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:57:05 crc kubenswrapper[4931]: I0130 05:57:05.423700 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:57:05 crc kubenswrapper[4931]: E0130 05:57:05.424642 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:57:19 crc kubenswrapper[4931]: I0130 05:57:19.422528 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:57:19 crc kubenswrapper[4931]: E0130 05:57:19.423682 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:57:31 crc kubenswrapper[4931]: I0130 05:57:31.422320 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:57:31 crc 
kubenswrapper[4931]: E0130 05:57:31.423162 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:57:44 crc kubenswrapper[4931]: I0130 05:57:44.422588 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:57:44 crc kubenswrapper[4931]: E0130 05:57:44.423818 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:57:59 crc kubenswrapper[4931]: I0130 05:57:59.422457 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:57:59 crc kubenswrapper[4931]: E0130 05:57:59.424251 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:58:11 crc kubenswrapper[4931]: I0130 05:58:11.423021 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:58:11 crc kubenswrapper[4931]: E0130 05:58:11.424225 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.686090 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gg7gh"] Jan 30 05:58:16 crc kubenswrapper[4931]: E0130 05:58:16.687277 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="extract-utilities" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.687313 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="extract-utilities" Jan 30 05:58:16 crc kubenswrapper[4931]: E0130 05:58:16.687343 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="registry-server" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.687360 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="registry-server" Jan 30 05:58:16 crc kubenswrapper[4931]: E0130 05:58:16.687417 4931 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="extract-content" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.687468 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="extract-content" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.687849 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="64eb9b0a-7a6b-479c-93ee-118642bac30f" containerName="registry-server" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.690710 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.728179 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg7gh"] Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.829702 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrkm\" (UniqueName: \"kubernetes.io/projected/9e69c7f8-6633-40ec-baf7-33cd56b80526-kube-api-access-9wrkm\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.829758 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-utilities\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.829857 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-catalog-content\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.931526 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-catalog-content\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.931704 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wrkm\" (UniqueName: \"kubernetes.io/projected/9e69c7f8-6633-40ec-baf7-33cd56b80526-kube-api-access-9wrkm\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.931751 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-utilities\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.932138 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-catalog-content\") 
pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.932281 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-utilities\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:16 crc kubenswrapper[4931]: I0130 05:58:16.959777 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wrkm\" (UniqueName: \"kubernetes.io/projected/9e69c7f8-6633-40ec-baf7-33cd56b80526-kube-api-access-9wrkm\") pod \"redhat-marketplace-gg7gh\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:17 crc kubenswrapper[4931]: I0130 05:58:17.026268 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:17 crc kubenswrapper[4931]: I0130 05:58:17.495872 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg7gh"] Jan 30 05:58:18 crc kubenswrapper[4931]: I0130 05:58:18.205340 4931 generic.go:334] "Generic (PLEG): container finished" podID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerID="65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f" exitCode=0 Jan 30 05:58:18 crc kubenswrapper[4931]: I0130 05:58:18.205451 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerDied","Data":"65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f"} Jan 30 05:58:18 crc kubenswrapper[4931]: I0130 05:58:18.205522 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerStarted","Data":"b6f132ffffa9aa9e1de373451be5e50268b753fc3cd9a4826a9346c118d3b238"} Jan 30 05:58:18 crc kubenswrapper[4931]: I0130 05:58:18.208506 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 05:58:19 crc kubenswrapper[4931]: I0130 05:58:19.213352 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerStarted","Data":"4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e"} Jan 30 05:58:20 crc kubenswrapper[4931]: I0130 05:58:20.227279 4931 generic.go:334] "Generic (PLEG): container finished" podID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerID="4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e" exitCode=0 Jan 30 05:58:20 crc kubenswrapper[4931]: I0130 05:58:20.227377 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerDied","Data":"4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e"} Jan 30 05:58:21 crc kubenswrapper[4931]: I0130 05:58:21.241374 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" 
event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerStarted","Data":"09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c"} Jan 30 05:58:21 crc kubenswrapper[4931]: I0130 05:58:21.285700 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gg7gh" podStartSLOduration=2.755532109 podStartE2EDuration="5.285664399s" podCreationTimestamp="2026-01-30 05:58:16 +0000 UTC" firstStartedPulling="2026-01-30 05:58:18.207908375 +0000 UTC m=+3033.577818662" lastFinishedPulling="2026-01-30 05:58:20.738040665 +0000 UTC m=+3036.107950952" observedRunningTime="2026-01-30 05:58:21.276096538 +0000 UTC m=+3036.646006805" watchObservedRunningTime="2026-01-30 05:58:21.285664399 +0000 UTC m=+3036.655574706" Jan 30 05:58:24 crc kubenswrapper[4931]: I0130 05:58:24.422507 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:58:24 crc kubenswrapper[4931]: E0130 05:58:24.423525 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:58:27 crc kubenswrapper[4931]: I0130 05:58:27.026650 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:27 crc kubenswrapper[4931]: I0130 05:58:27.026790 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:27 crc kubenswrapper[4931]: I0130 05:58:27.111589 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:27 crc kubenswrapper[4931]: I0130 05:58:27.362753 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:30 crc kubenswrapper[4931]: I0130 05:58:30.667286 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg7gh"] Jan 30 05:58:30 crc kubenswrapper[4931]: I0130 05:58:30.668013 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gg7gh" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="registry-server" containerID="cri-o://09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c" gracePeriod=2 Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.076431 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.267628 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-catalog-content\") pod \"9e69c7f8-6633-40ec-baf7-33cd56b80526\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.268159 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-utilities\") pod \"9e69c7f8-6633-40ec-baf7-33cd56b80526\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.268274 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wrkm\" (UniqueName: \"kubernetes.io/projected/9e69c7f8-6633-40ec-baf7-33cd56b80526-kube-api-access-9wrkm\") pod \"9e69c7f8-6633-40ec-baf7-33cd56b80526\" (UID: \"9e69c7f8-6633-40ec-baf7-33cd56b80526\") " Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.269164 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-utilities" (OuterVolumeSpecName: "utilities") pod "9e69c7f8-6633-40ec-baf7-33cd56b80526" (UID: "9e69c7f8-6633-40ec-baf7-33cd56b80526"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.279574 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e69c7f8-6633-40ec-baf7-33cd56b80526-kube-api-access-9wrkm" (OuterVolumeSpecName: "kube-api-access-9wrkm") pod "9e69c7f8-6633-40ec-baf7-33cd56b80526" (UID: "9e69c7f8-6633-40ec-baf7-33cd56b80526"). InnerVolumeSpecName "kube-api-access-9wrkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.309190 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e69c7f8-6633-40ec-baf7-33cd56b80526" (UID: "9e69c7f8-6633-40ec-baf7-33cd56b80526"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.348096 4931 generic.go:334] "Generic (PLEG): container finished" podID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerID="09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c" exitCode=0 Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.348150 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerDied","Data":"09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c"} Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.348181 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg7gh" event={"ID":"9e69c7f8-6633-40ec-baf7-33cd56b80526","Type":"ContainerDied","Data":"b6f132ffffa9aa9e1de373451be5e50268b753fc3cd9a4826a9346c118d3b238"} Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.348673 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg7gh" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.348774 4931 scope.go:117] "RemoveContainer" containerID="09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.369676 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wrkm\" (UniqueName: \"kubernetes.io/projected/9e69c7f8-6633-40ec-baf7-33cd56b80526-kube-api-access-9wrkm\") on node \"crc\" DevicePath \"\"" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.369722 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.369742 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e69c7f8-6633-40ec-baf7-33cd56b80526-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.381966 4931 scope.go:117] "RemoveContainer" containerID="4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.389607 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg7gh"] Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.395414 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg7gh"] Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.401466 4931 scope.go:117] "RemoveContainer" containerID="65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.428561 4931 scope.go:117] "RemoveContainer" containerID="09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c" Jan 30 05:58:31 crc kubenswrapper[4931]: E0130 05:58:31.428951 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c\": container with ID starting with 09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c not found: ID does not exist" containerID="09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.429067 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c"} err="failed to get container status \"09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c\": rpc error: code = NotFound desc = could not find container \"09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c\": container with ID starting with 09313b96f5317ca2081492e05aecd5ea8d38b582e37e0b6596d418a718ebe02c not found: ID does not exist" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.429150 4931 scope.go:117] "RemoveContainer" containerID="4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e" Jan 30 05:58:31 crc kubenswrapper[4931]: E0130 05:58:31.429602 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e\": container with ID starting with 4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e not found: ID 
does not exist" containerID="4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.429677 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e"} err="failed to get container status \"4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e\": rpc error: code = NotFound desc = could not find container \"4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e\": container with ID starting with 4495e4f4bca5085913535b514847ff140fc4fe4bef6afe94338910b4a33b047e not found: ID does not exist" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.429738 4931 scope.go:117] "RemoveContainer" containerID="65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.429818 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" path="/var/lib/kubelet/pods/9e69c7f8-6633-40ec-baf7-33cd56b80526/volumes" Jan 30 05:58:31 crc kubenswrapper[4931]: E0130 05:58:31.430122 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f\": container with ID starting with 65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f not found: ID does not exist" containerID="65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f" Jan 30 05:58:31 crc kubenswrapper[4931]: I0130 05:58:31.430182 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f"} err="failed to get container status \"65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f\": rpc error: code = NotFound desc = could not find container \"65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f\": container with ID starting with 65c5aa46d913497807faa62cda964f5223d6c4d84da21306349092f6cf00db7f not found: ID does not exist" Jan 30 05:58:35 crc kubenswrapper[4931]: I0130 05:58:35.429633 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:58:35 crc kubenswrapper[4931]: E0130 05:58:35.431011 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:58:49 crc kubenswrapper[4931]: I0130 05:58:49.421936 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:58:49 crc kubenswrapper[4931]: E0130 05:58:49.422755 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:59:02 crc 
kubenswrapper[4931]: I0130 05:59:02.422119 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:59:02 crc kubenswrapper[4931]: E0130 05:59:02.424912 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:59:14 crc kubenswrapper[4931]: I0130 05:59:14.422120 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:59:14 crc kubenswrapper[4931]: E0130 05:59:14.423483 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:59:29 crc kubenswrapper[4931]: I0130 05:59:29.422549 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:59:29 crc kubenswrapper[4931]: E0130 05:59:29.423880 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:59:44 crc kubenswrapper[4931]: I0130 05:59:44.422418 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:59:44 crc kubenswrapper[4931]: E0130 05:59:44.425112 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 05:59:57 crc kubenswrapper[4931]: I0130 05:59:57.422983 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 05:59:57 crc kubenswrapper[4931]: E0130 05:59:57.423991 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.166268 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g"] Jan 30 06:00:00 crc 
kubenswrapper[4931]: E0130 06:00:00.167304 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="extract-content" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.167323 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="extract-content" Jan 30 06:00:00 crc kubenswrapper[4931]: E0130 06:00:00.167345 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="extract-utilities" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.167362 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="extract-utilities" Jan 30 06:00:00 crc kubenswrapper[4931]: E0130 06:00:00.167395 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="registry-server" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.167403 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="registry-server" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.167628 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e69c7f8-6633-40ec-baf7-33cd56b80526" containerName="registry-server" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.168508 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.171960 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.176649 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.182237 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g"] Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.325927 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-config-volume\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.326006 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgsk\" (UniqueName: \"kubernetes.io/projected/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-kube-api-access-hxgsk\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.326285 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-secret-volume\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc 
kubenswrapper[4931]: I0130 06:00:00.428310 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-secret-volume\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.428525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-config-volume\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.428560 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgsk\" (UniqueName: \"kubernetes.io/projected/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-kube-api-access-hxgsk\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.430534 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-config-volume\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.437003 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-secret-volume\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.461709 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxgsk\" (UniqueName: \"kubernetes.io/projected/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-kube-api-access-hxgsk\") pod \"collect-profiles-29495880-jrm5g\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:00 crc kubenswrapper[4931]: I0130 06:00:00.494600 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:01 crc kubenswrapper[4931]: I0130 06:00:01.047094 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g"] Jan 30 06:00:01 crc kubenswrapper[4931]: I0130 06:00:01.183054 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" event={"ID":"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba","Type":"ContainerStarted","Data":"1940902600dfb612e1c4d9cb34bdb0c6655ed85cfd9760778627a748d0b9b96e"} Jan 30 06:00:02 crc kubenswrapper[4931]: I0130 06:00:02.194121 4931 generic.go:334] "Generic (PLEG): container finished" podID="ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" containerID="18c71eb1241272ed04cbfce337c51a3320bfd0991c28ac36edc8dd0665668963" exitCode=0 Jan 30 06:00:02 crc kubenswrapper[4931]: I0130 06:00:02.194198 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" event={"ID":"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba","Type":"ContainerDied","Data":"18c71eb1241272ed04cbfce337c51a3320bfd0991c28ac36edc8dd0665668963"} Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.533186 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.582383 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-secret-volume\") pod \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.582508 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-config-volume\") pod \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.582656 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxgsk\" (UniqueName: \"kubernetes.io/projected/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-kube-api-access-hxgsk\") pod \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\" (UID: \"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba\") " Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.600341 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" (UID: "ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.601190 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" (UID: "ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.601309 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-kube-api-access-hxgsk" (OuterVolumeSpecName: "kube-api-access-hxgsk") pod "ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" (UID: "ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba"). InnerVolumeSpecName "kube-api-access-hxgsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.685475 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxgsk\" (UniqueName: \"kubernetes.io/projected/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-kube-api-access-hxgsk\") on node \"crc\" DevicePath \"\"" Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.685536 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:00:03 crc kubenswrapper[4931]: I0130 06:00:03.685560 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:00:04 crc kubenswrapper[4931]: I0130 06:00:04.215315 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" event={"ID":"ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba","Type":"ContainerDied","Data":"1940902600dfb612e1c4d9cb34bdb0c6655ed85cfd9760778627a748d0b9b96e"} Jan 30 06:00:04 crc kubenswrapper[4931]: I0130 06:00:04.215371 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1940902600dfb612e1c4d9cb34bdb0c6655ed85cfd9760778627a748d0b9b96e" Jan 30 06:00:04 crc kubenswrapper[4931]: I0130 06:00:04.215416 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g" Jan 30 06:00:04 crc kubenswrapper[4931]: I0130 06:00:04.633354 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb"] Jan 30 06:00:04 crc kubenswrapper[4931]: I0130 06:00:04.639207 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495835-kk9gb"] Jan 30 06:00:05 crc kubenswrapper[4931]: I0130 06:00:05.436848 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2119e7a8-c484-4aef-ac04-c3f82433738d" path="/var/lib/kubelet/pods/2119e7a8-c484-4aef-ac04-c3f82433738d/volumes" Jan 30 06:00:09 crc kubenswrapper[4931]: I0130 06:00:09.422779 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 06:00:09 crc kubenswrapper[4931]: E0130 06:00:09.423486 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:00:22 crc kubenswrapper[4931]: I0130 06:00:22.422500 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 06:00:22 crc kubenswrapper[4931]: E0130 06:00:22.424048 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:00:35 crc kubenswrapper[4931]: I0130 06:00:35.430650 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 06:00:35 crc kubenswrapper[4931]: E0130 06:00:35.431553 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:00:48 crc kubenswrapper[4931]: I0130 06:00:48.422877 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 06:00:48 crc kubenswrapper[4931]: E0130 06:00:48.424109 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:00:52 crc kubenswrapper[4931]: I0130 06:00:52.694964 4931 scope.go:117] "RemoveContainer" 
containerID="93024ef1482e0faf5c83b31d25bb0153752fe08f7d8619cc6cdb7d2120e5e084" Jan 30 06:01:00 crc kubenswrapper[4931]: I0130 06:01:00.422787 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 06:01:00 crc kubenswrapper[4931]: I0130 06:01:00.715491 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"3ee39d7345786d63837440f7d9f37384973d255b777a319f3ba033acd5419f51"} Jan 30 06:03:27 crc kubenswrapper[4931]: I0130 06:03:27.363389 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:03:27 crc kubenswrapper[4931]: I0130 06:03:27.364292 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:03:57 crc kubenswrapper[4931]: I0130 06:03:57.362710 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:03:57 crc kubenswrapper[4931]: I0130 06:03:57.363528 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.362922 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.364695 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.364877 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.365887 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ee39d7345786d63837440f7d9f37384973d255b777a319f3ba033acd5419f51"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.366000 4931 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://3ee39d7345786d63837440f7d9f37384973d255b777a319f3ba033acd5419f51" gracePeriod=600 Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.553437 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="3ee39d7345786d63837440f7d9f37384973d255b777a319f3ba033acd5419f51" exitCode=0 Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.553626 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"3ee39d7345786d63837440f7d9f37384973d255b777a319f3ba033acd5419f51"} Jan 30 06:04:27 crc kubenswrapper[4931]: I0130 06:04:27.553772 4931 scope.go:117] "RemoveContainer" containerID="f2f73de32d7d73d5b5bb194607c20eb568ac797bbaa7bad33c857b1a456369eb" Jan 30 06:04:28 crc kubenswrapper[4931]: I0130 06:04:28.566543 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"} Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.364416 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwvw5"] Jan 30 06:04:34 crc kubenswrapper[4931]: E0130 06:04:34.365653 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" containerName="collect-profiles" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.365678 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" containerName="collect-profiles" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.366016 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" containerName="collect-profiles" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.369406 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.377650 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwvw5"] Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.447396 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-catalog-content\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.447589 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-utilities\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.447641 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxcb\" (UniqueName: \"kubernetes.io/projected/02c97a92-5bac-414e-ba28-8558cb9dbd96-kube-api-access-jdxcb\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.549472 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-utilities\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.549549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdxcb\" (UniqueName: \"kubernetes.io/projected/02c97a92-5bac-414e-ba28-8558cb9dbd96-kube-api-access-jdxcb\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.549813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-catalog-content\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.550754 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-catalog-content\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.550872 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-utilities\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.586979 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jdxcb\" (UniqueName: \"kubernetes.io/projected/02c97a92-5bac-414e-ba28-8558cb9dbd96-kube-api-access-jdxcb\") pod \"certified-operators-mwvw5\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:34 crc kubenswrapper[4931]: I0130 06:04:34.762122 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:35 crc kubenswrapper[4931]: I0130 06:04:35.225641 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwvw5"] Jan 30 06:04:35 crc kubenswrapper[4931]: I0130 06:04:35.629405 4931 generic.go:334] "Generic (PLEG): container finished" podID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerID="9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5" exitCode=0 Jan 30 06:04:35 crc kubenswrapper[4931]: I0130 06:04:35.629491 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerDied","Data":"9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5"} Jan 30 06:04:35 crc kubenswrapper[4931]: I0130 06:04:35.629796 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerStarted","Data":"a63c24b05c14a83e6af87250f90141d89ab441e8933bbe43b0ae52740900c283"} Jan 30 06:04:35 crc kubenswrapper[4931]: I0130 06:04:35.631740 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:04:36 crc kubenswrapper[4931]: I0130 06:04:36.649328 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerStarted","Data":"fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a"} Jan 30 06:04:37 crc kubenswrapper[4931]: I0130 06:04:37.661406 4931 generic.go:334] "Generic (PLEG): container finished" podID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerID="fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a" exitCode=0 Jan 30 06:04:37 crc kubenswrapper[4931]: I0130 06:04:37.661538 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerDied","Data":"fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a"} Jan 30 06:04:38 crc kubenswrapper[4931]: I0130 06:04:38.675514 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerStarted","Data":"feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140"} Jan 30 06:04:38 crc kubenswrapper[4931]: I0130 06:04:38.709327 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwvw5" podStartSLOduration=2.291144972 podStartE2EDuration="4.709302413s" podCreationTimestamp="2026-01-30 06:04:34 +0000 UTC" firstStartedPulling="2026-01-30 06:04:35.631115279 +0000 UTC m=+3411.001025566" lastFinishedPulling="2026-01-30 06:04:38.04927274 +0000 UTC m=+3413.419183007" observedRunningTime="2026-01-30 06:04:38.706306388 +0000 UTC m=+3414.076216675" watchObservedRunningTime="2026-01-30 
06:04:38.709302413 +0000 UTC m=+3414.079212710" Jan 30 06:04:44 crc kubenswrapper[4931]: I0130 06:04:44.763210 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:44 crc kubenswrapper[4931]: I0130 06:04:44.763639 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:44 crc kubenswrapper[4931]: I0130 06:04:44.834515 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:45 crc kubenswrapper[4931]: I0130 06:04:45.818155 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:45 crc kubenswrapper[4931]: I0130 06:04:45.918182 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwvw5"] Jan 30 06:04:47 crc kubenswrapper[4931]: I0130 06:04:47.762815 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwvw5" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="registry-server" containerID="cri-o://feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140" gracePeriod=2 Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.271569 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.451218 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-utilities\") pod \"02c97a92-5bac-414e-ba28-8558cb9dbd96\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.451275 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdxcb\" (UniqueName: \"kubernetes.io/projected/02c97a92-5bac-414e-ba28-8558cb9dbd96-kube-api-access-jdxcb\") pod \"02c97a92-5bac-414e-ba28-8558cb9dbd96\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.451402 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-catalog-content\") pod \"02c97a92-5bac-414e-ba28-8558cb9dbd96\" (UID: \"02c97a92-5bac-414e-ba28-8558cb9dbd96\") " Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.452285 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-utilities" (OuterVolumeSpecName: "utilities") pod "02c97a92-5bac-414e-ba28-8558cb9dbd96" (UID: "02c97a92-5bac-414e-ba28-8558cb9dbd96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.452601 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.460084 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c97a92-5bac-414e-ba28-8558cb9dbd96-kube-api-access-jdxcb" (OuterVolumeSpecName: "kube-api-access-jdxcb") pod "02c97a92-5bac-414e-ba28-8558cb9dbd96" (UID: "02c97a92-5bac-414e-ba28-8558cb9dbd96"). InnerVolumeSpecName "kube-api-access-jdxcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.496703 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c97a92-5bac-414e-ba28-8558cb9dbd96" (UID: "02c97a92-5bac-414e-ba28-8558cb9dbd96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.554794 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdxcb\" (UniqueName: \"kubernetes.io/projected/02c97a92-5bac-414e-ba28-8558cb9dbd96-kube-api-access-jdxcb\") on node \"crc\" DevicePath \"\"" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.554854 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c97a92-5bac-414e-ba28-8558cb9dbd96-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.775694 4931 generic.go:334] "Generic (PLEG): container finished" podID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerID="feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140" exitCode=0 Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.775757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerDied","Data":"feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140"} Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.775792 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwvw5" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.775824 4931 scope.go:117] "RemoveContainer" containerID="feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.775806 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwvw5" event={"ID":"02c97a92-5bac-414e-ba28-8558cb9dbd96","Type":"ContainerDied","Data":"a63c24b05c14a83e6af87250f90141d89ab441e8933bbe43b0ae52740900c283"} Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.808684 4931 scope.go:117] "RemoveContainer" containerID="fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.830318 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwvw5"] Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.842472 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mwvw5"] Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.857153 4931 scope.go:117] "RemoveContainer" containerID="9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.889768 4931 scope.go:117] "RemoveContainer" containerID="feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140" Jan 30 06:04:48 crc kubenswrapper[4931]: E0130 06:04:48.890524 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140\": container with ID starting with feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140 not found: ID does not exist" containerID="feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.890649 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140"} err="failed to get container status \"feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140\": rpc error: code = NotFound desc = could not find container \"feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140\": container with ID starting with feed4d6c848880fdc6fb49613c7c3962a5ae58756d94d295ec9e57e233d4e140 not found: ID does not exist" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.890785 4931 scope.go:117] "RemoveContainer" containerID="fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a" Jan 30 06:04:48 crc kubenswrapper[4931]: E0130 06:04:48.891419 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a\": container with ID starting with fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a not found: ID does not exist" containerID="fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.891540 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a"} err="failed to get container status \"fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a\": rpc error: code = NotFound desc = could not find 
container \"fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a\": container with ID starting with fd726bf9b73f80d9c6177c813cc0c940b41c35ad864d34170569c019c1c3d14a not found: ID does not exist" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.891581 4931 scope.go:117] "RemoveContainer" containerID="9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5" Jan 30 06:04:48 crc kubenswrapper[4931]: E0130 06:04:48.892105 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5\": container with ID starting with 9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5 not found: ID does not exist" containerID="9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5" Jan 30 06:04:48 crc kubenswrapper[4931]: I0130 06:04:48.892148 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5"} err="failed to get container status \"9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5\": rpc error: code = NotFound desc = could not find container \"9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5\": container with ID starting with 9891d825719792c8ef2ff6a45a990cfc6e2bdb363833ded55f9e3be7ec2562e5 not found: ID does not exist" Jan 30 06:04:49 crc kubenswrapper[4931]: I0130 06:04:49.437895 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" path="/var/lib/kubelet/pods/02c97a92-5bac-414e-ba28-8558cb9dbd96/volumes" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.003120 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-llckc"] Jan 30 06:05:53 crc kubenswrapper[4931]: E0130 06:05:53.004519 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="extract-content" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.004549 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="extract-content" Jan 30 06:05:53 crc kubenswrapper[4931]: E0130 06:05:53.004585 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="registry-server" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.004601 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="registry-server" Jan 30 06:05:53 crc kubenswrapper[4931]: E0130 06:05:53.004633 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="extract-utilities" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.004649 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="extract-utilities" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.004952 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c97a92-5bac-414e-ba28-8558cb9dbd96" containerName="registry-server" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.009575 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.038802 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llckc"] Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.152991 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-utilities\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.153251 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtth6\" (UniqueName: \"kubernetes.io/projected/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-kube-api-access-rtth6\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.153325 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-catalog-content\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.255052 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtth6\" (UniqueName: \"kubernetes.io/projected/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-kube-api-access-rtth6\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.255116 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-catalog-content\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.255178 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-utilities\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.256006 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-catalog-content\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.256191 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-utilities\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.277960 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rtth6\" (UniqueName: \"kubernetes.io/projected/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-kube-api-access-rtth6\") pod \"redhat-operators-llckc\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.339102 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:05:53 crc kubenswrapper[4931]: I0130 06:05:53.824296 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llckc"] Jan 30 06:05:54 crc kubenswrapper[4931]: I0130 06:05:54.423533 4931 generic.go:334] "Generic (PLEG): container finished" podID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerID="8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a" exitCode=0 Jan 30 06:05:54 crc kubenswrapper[4931]: I0130 06:05:54.423599 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerDied","Data":"8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a"} Jan 30 06:05:54 crc kubenswrapper[4931]: I0130 06:05:54.423641 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerStarted","Data":"d5d707b25abaf91ebe43fe475bc7a9765c36e250e9a1cf1650be37aaaf0e7b24"} Jan 30 06:05:55 crc kubenswrapper[4931]: I0130 06:05:55.442506 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerStarted","Data":"777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153"} Jan 30 06:05:56 crc kubenswrapper[4931]: I0130 06:05:56.452628 4931 generic.go:334] "Generic (PLEG): container finished" podID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerID="777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153" exitCode=0 Jan 30 06:05:56 crc kubenswrapper[4931]: I0130 06:05:56.452703 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerDied","Data":"777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153"} Jan 30 06:05:57 crc kubenswrapper[4931]: I0130 06:05:57.461302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerStarted","Data":"5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5"} Jan 30 06:05:57 crc kubenswrapper[4931]: I0130 06:05:57.495053 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-llckc" podStartSLOduration=3.044566233 podStartE2EDuration="5.495025588s" podCreationTimestamp="2026-01-30 06:05:52 +0000 UTC" firstStartedPulling="2026-01-30 06:05:54.425458278 +0000 UTC m=+3489.795368545" lastFinishedPulling="2026-01-30 06:05:56.875917603 +0000 UTC m=+3492.245827900" observedRunningTime="2026-01-30 06:05:57.485762906 +0000 UTC m=+3492.855673153" watchObservedRunningTime="2026-01-30 06:05:57.495025588 +0000 UTC m=+3492.864935895" Jan 30 06:06:03 crc kubenswrapper[4931]: I0130 06:06:03.340024 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 
Jan 30 06:06:04 crc kubenswrapper[4931]: I0130 06:06:04.415148 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-llckc" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="registry-server" probeResult="failure" output=< Jan 30 06:06:04 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:06:04 crc kubenswrapper[4931]: > Jan 30 06:06:13 crc kubenswrapper[4931]: I0130 06:06:13.413522 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:06:13 crc kubenswrapper[4931]: I0130 06:06:13.485917 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:06:13 crc kubenswrapper[4931]: I0130 06:06:13.665144 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-llckc"] Jan 30 06:06:14 crc kubenswrapper[4931]: I0130 06:06:14.601407 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-llckc" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="registry-server" containerID="cri-o://5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5" gracePeriod=2 Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.149722 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.181657 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtth6\" (UniqueName: \"kubernetes.io/projected/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-kube-api-access-rtth6\") pod \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.181809 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-catalog-content\") pod \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.181867 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-utilities\") pod \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\" (UID: \"c0d383f8-74f2-49c4-8586-1c0420ec4d5f\") " Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.182563 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-utilities" (OuterVolumeSpecName: "utilities") pod "c0d383f8-74f2-49c4-8586-1c0420ec4d5f" (UID: "c0d383f8-74f2-49c4-8586-1c0420ec4d5f"). InnerVolumeSpecName "utilities". 
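PluginName "kubernetes.io/empty-dir", VolumeGidValue ""

The startup-probe failure above — timeout: failed to connect service ":50051" within 1s — is the signature of a gRPC health check against the registry-server before the catalog has finished loading; ten seconds later the probe flips to started and ready, so this is startup latency, not a crash. Roughly what the probe does, assuming the container exposes the standard grpc.health.v1.Health service on :50051 (which OLM registry images do, to my knowledge):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// A 1s-timeout gRPC health check, mirroring the probe output in the log.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock()) // fail fast if nothing is listening yet
	if err != nil {
		fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	fmt.Println("status:", resp.Status) // SERVING once the catalog is ready
}
```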
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.186602 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-kube-api-access-rtth6" (OuterVolumeSpecName: "kube-api-access-rtth6") pod "c0d383f8-74f2-49c4-8586-1c0420ec4d5f" (UID: "c0d383f8-74f2-49c4-8586-1c0420ec4d5f"). InnerVolumeSpecName "kube-api-access-rtth6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.283538 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.283569 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtth6\" (UniqueName: \"kubernetes.io/projected/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-kube-api-access-rtth6\") on node \"crc\" DevicePath \"\"" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.310677 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0d383f8-74f2-49c4-8586-1c0420ec4d5f" (UID: "c0d383f8-74f2-49c4-8586-1c0420ec4d5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.385806 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d383f8-74f2-49c4-8586-1c0420ec4d5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.613876 4931 generic.go:334] "Generic (PLEG): container finished" podID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerID="5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5" exitCode=0 Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.613993 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerDied","Data":"5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5"} Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.614562 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llckc" event={"ID":"c0d383f8-74f2-49c4-8586-1c0420ec4d5f","Type":"ContainerDied","Data":"d5d707b25abaf91ebe43fe475bc7a9765c36e250e9a1cf1650be37aaaf0e7b24"} Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.614594 4931 scope.go:117] "RemoveContainer" containerID="5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.613964 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llckc" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.643649 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-llckc"] Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.653998 4931 scope.go:117] "RemoveContainer" containerID="777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.654698 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-llckc"] Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.684014 4931 scope.go:117] "RemoveContainer" containerID="8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.725216 4931 scope.go:117] "RemoveContainer" containerID="5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5" Jan 30 06:06:15 crc kubenswrapper[4931]: E0130 06:06:15.725827 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5\": container with ID starting with 5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5 not found: ID does not exist" containerID="5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.725875 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5"} err="failed to get container status \"5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5\": rpc error: code = NotFound desc = could not find container \"5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5\": container with ID starting with 5506d6cbe6111ed556ba1975bcde73216453214b22d4fe8608535a9935424bd5 not found: ID does not exist" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.725910 4931 scope.go:117] "RemoveContainer" containerID="777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153" Jan 30 06:06:15 crc kubenswrapper[4931]: E0130 06:06:15.726540 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153\": container with ID starting with 777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153 not found: ID does not exist" containerID="777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.726586 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153"} err="failed to get container status \"777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153\": rpc error: code = NotFound desc = could not find container \"777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153\": container with ID starting with 777ae7236efc1d41bc64eedf60b129a8e00c73c9050cac2e9347a487b87bd153 not found: ID does not exist" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.726674 4931 scope.go:117] "RemoveContainer" containerID="8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a" Jan 30 06:06:15 crc kubenswrapper[4931]: E0130 06:06:15.727129 4931 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a\": container with ID starting with 8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a not found: ID does not exist" containerID="8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a" Jan 30 06:06:15 crc kubenswrapper[4931]: I0130 06:06:15.727163 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a"} err="failed to get container status \"8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a\": rpc error: code = NotFound desc = could not find container \"8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a\": container with ID starting with 8ad68add48f759e5bde99e4704165c1ff41188336b12af217effa58c26ea1e3a not found: ID does not exist" Jan 30 06:06:17 crc kubenswrapper[4931]: I0130 06:06:17.438493 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" path="/var/lib/kubelet/pods/c0d383f8-74f2-49c4-8586-1c0420ec4d5f/volumes" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.069370 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8jt4"] Jan 30 06:06:20 crc kubenswrapper[4931]: E0130 06:06:20.069930 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="registry-server" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.069943 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="registry-server" Jan 30 06:06:20 crc kubenswrapper[4931]: E0130 06:06:20.069954 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="extract-utilities" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.069960 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="extract-utilities" Jan 30 06:06:20 crc kubenswrapper[4931]: E0130 06:06:20.069976 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="extract-content" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.069984 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="extract-content" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.070121 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d383f8-74f2-49c4-8586-1c0420ec4d5f" containerName="registry-server" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.071193 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.086093 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8jt4"] Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.255411 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4ww\" (UniqueName: \"kubernetes.io/projected/da8ef012-9169-4a7f-9a5f-089f037767cb-kube-api-access-dk4ww\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.255488 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-catalog-content\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.255800 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-utilities\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.364433 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4ww\" (UniqueName: \"kubernetes.io/projected/da8ef012-9169-4a7f-9a5f-089f037767cb-kube-api-access-dk4ww\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.364501 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-catalog-content\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.364518 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-utilities\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.364920 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-utilities\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.365285 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-catalog-content\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.396695 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dk4ww\" (UniqueName: \"kubernetes.io/projected/da8ef012-9169-4a7f-9a5f-089f037767cb-kube-api-access-dk4ww\") pod \"community-operators-j8jt4\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.403071 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.623319 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8jt4"] Jan 30 06:06:20 crc kubenswrapper[4931]: I0130 06:06:20.665872 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jt4" event={"ID":"da8ef012-9169-4a7f-9a5f-089f037767cb","Type":"ContainerStarted","Data":"ee74a72d8120c634d75689bf443743d2d71166952e679e589aec7096851cd440"} Jan 30 06:06:21 crc kubenswrapper[4931]: I0130 06:06:21.678258 4931 generic.go:334] "Generic (PLEG): container finished" podID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerID="e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da" exitCode=0 Jan 30 06:06:21 crc kubenswrapper[4931]: I0130 06:06:21.678325 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jt4" event={"ID":"da8ef012-9169-4a7f-9a5f-089f037767cb","Type":"ContainerDied","Data":"e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da"} Jan 30 06:06:23 crc kubenswrapper[4931]: I0130 06:06:23.698177 4931 generic.go:334] "Generic (PLEG): container finished" podID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerID="f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d" exitCode=0 Jan 30 06:06:23 crc kubenswrapper[4931]: I0130 06:06:23.698253 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jt4" event={"ID":"da8ef012-9169-4a7f-9a5f-089f037767cb","Type":"ContainerDied","Data":"f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d"} Jan 30 06:06:24 crc kubenswrapper[4931]: I0130 06:06:24.711193 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jt4" event={"ID":"da8ef012-9169-4a7f-9a5f-089f037767cb","Type":"ContainerStarted","Data":"d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f"} Jan 30 06:06:24 crc kubenswrapper[4931]: I0130 06:06:24.746865 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8jt4" podStartSLOduration=2.313809279 podStartE2EDuration="4.746838432s" podCreationTimestamp="2026-01-30 06:06:20 +0000 UTC" firstStartedPulling="2026-01-30 06:06:21.681437638 +0000 UTC m=+3517.051347925" lastFinishedPulling="2026-01-30 06:06:24.114466811 +0000 UTC m=+3519.484377078" observedRunningTime="2026-01-30 06:06:24.736746956 +0000 UTC m=+3520.106657243" watchObservedRunningTime="2026-01-30 06:06:24.746838432 +0000 UTC m=+3520.116748729" Jan 30 06:06:27 crc kubenswrapper[4931]: I0130 06:06:27.362749 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:06:27 crc kubenswrapper[4931]: I0130 06:06:27.362808 4931 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:06:30 crc kubenswrapper[4931]: I0130 06:06:30.404521 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:30 crc kubenswrapper[4931]: I0130 06:06:30.404882 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:30 crc kubenswrapper[4931]: I0130 06:06:30.463976 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:30 crc kubenswrapper[4931]: I0130 06:06:30.823478 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:30 crc kubenswrapper[4931]: I0130 06:06:30.876350 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8jt4"] Jan 30 06:06:32 crc kubenswrapper[4931]: I0130 06:06:32.780347 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j8jt4" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="registry-server" containerID="cri-o://d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f" gracePeriod=2 Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.312979 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.432153 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-utilities\") pod \"da8ef012-9169-4a7f-9a5f-089f037767cb\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.432218 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-catalog-content\") pod \"da8ef012-9169-4a7f-9a5f-089f037767cb\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.432466 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4ww\" (UniqueName: \"kubernetes.io/projected/da8ef012-9169-4a7f-9a5f-089f037767cb-kube-api-access-dk4ww\") pod \"da8ef012-9169-4a7f-9a5f-089f037767cb\" (UID: \"da8ef012-9169-4a7f-9a5f-089f037767cb\") " Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.434516 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-utilities" (OuterVolumeSpecName: "utilities") pod "da8ef012-9169-4a7f-9a5f-089f037767cb" (UID: "da8ef012-9169-4a7f-9a5f-089f037767cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.434761 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.441923 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8ef012-9169-4a7f-9a5f-089f037767cb-kube-api-access-dk4ww" (OuterVolumeSpecName: "kube-api-access-dk4ww") pod "da8ef012-9169-4a7f-9a5f-089f037767cb" (UID: "da8ef012-9169-4a7f-9a5f-089f037767cb"). InnerVolumeSpecName "kube-api-access-dk4ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.492165 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da8ef012-9169-4a7f-9a5f-089f037767cb" (UID: "da8ef012-9169-4a7f-9a5f-089f037767cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.536351 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8ef012-9169-4a7f-9a5f-089f037767cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.536403 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4ww\" (UniqueName: \"kubernetes.io/projected/da8ef012-9169-4a7f-9a5f-089f037767cb-kube-api-access-dk4ww\") on node \"crc\" DevicePath \"\"" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.791710 4931 generic.go:334] "Generic (PLEG): container finished" podID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerID="d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f" exitCode=0 Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.791757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jt4" event={"ID":"da8ef012-9169-4a7f-9a5f-089f037767cb","Type":"ContainerDied","Data":"d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f"} Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.791795 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jt4" event={"ID":"da8ef012-9169-4a7f-9a5f-089f037767cb","Type":"ContainerDied","Data":"ee74a72d8120c634d75689bf443743d2d71166952e679e589aec7096851cd440"} Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.791817 4931 scope.go:117] "RemoveContainer" containerID="d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.791852 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j8jt4" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.822165 4931 scope.go:117] "RemoveContainer" containerID="f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.847749 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8jt4"] Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.855849 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j8jt4"] Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.865114 4931 scope.go:117] "RemoveContainer" containerID="e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.911082 4931 scope.go:117] "RemoveContainer" containerID="d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f" Jan 30 06:06:33 crc kubenswrapper[4931]: E0130 06:06:33.911796 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f\": container with ID starting with d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f not found: ID does not exist" containerID="d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.911854 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f"} err="failed to get container status \"d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f\": rpc error: code = NotFound desc = could not find container \"d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f\": container with ID starting with d5ae3d0c435be23282c47a6f4101d591c13d393322c4cb16f5a352737782a19f not found: ID does not exist" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.911984 4931 scope.go:117] "RemoveContainer" containerID="f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d" Jan 30 06:06:33 crc kubenswrapper[4931]: E0130 06:06:33.912409 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d\": container with ID starting with f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d not found: ID does not exist" containerID="f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.912497 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d"} err="failed to get container status \"f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d\": rpc error: code = NotFound desc = could not find container \"f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d\": container with ID starting with f6708eb42c892ea20b0cb1810bd6b68cdb5921ffd71de7b46668ec865179c12d not found: ID does not exist" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.912528 4931 scope.go:117] "RemoveContainer" containerID="e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da" Jan 30 06:06:33 crc kubenswrapper[4931]: E0130 06:06:33.913195 4931 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da\": container with ID starting with e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da not found: ID does not exist" containerID="e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da" Jan 30 06:06:33 crc kubenswrapper[4931]: I0130 06:06:33.913243 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da"} err="failed to get container status \"e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da\": rpc error: code = NotFound desc = could not find container \"e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da\": container with ID starting with e0a99bf4b444fc11507f43c2e6824e602775ea1a814158f6d39bc9e5bf8b54da not found: ID does not exist" Jan 30 06:06:35 crc kubenswrapper[4931]: I0130 06:06:35.440282 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" path="/var/lib/kubelet/pods/da8ef012-9169-4a7f-9a5f-089f037767cb/volumes" Jan 30 06:06:57 crc kubenswrapper[4931]: I0130 06:06:57.362929 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:06:57 crc kubenswrapper[4931]: I0130 06:06:57.363663 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:07:27 crc kubenswrapper[4931]: I0130 06:07:27.362818 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:07:27 crc kubenswrapper[4931]: I0130 06:07:27.363562 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:07:27 crc kubenswrapper[4931]: I0130 06:07:27.363644 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:07:27 crc kubenswrapper[4931]: I0130 06:07:27.364707 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:07:27 crc kubenswrapper[4931]: I0130 06:07:27.364814 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" gracePeriod=600 Jan 30 06:07:27 crc kubenswrapper[4931]: E0130 06:07:27.493861 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:07:28 crc kubenswrapper[4931]: I0130 06:07:28.319243 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" exitCode=0 Jan 30 06:07:28 crc kubenswrapper[4931]: I0130 06:07:28.319308 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"} Jan 30 06:07:28 crc kubenswrapper[4931]: I0130 06:07:28.319727 4931 scope.go:117] "RemoveContainer" containerID="3ee39d7345786d63837440f7d9f37384973d255b777a319f3ba033acd5419f51" Jan 30 06:07:28 crc kubenswrapper[4931]: I0130 06:07:28.320500 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:07:28 crc kubenswrapper[4931]: E0130 06:07:28.320893 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:07:42 crc kubenswrapper[4931]: I0130 06:07:42.422227 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:07:42 crc kubenswrapper[4931]: E0130 06:07:42.423313 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:07:56 crc kubenswrapper[4931]: I0130 06:07:56.422116 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:07:56 crc kubenswrapper[4931]: E0130 06:07:56.423119 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:08:10 crc kubenswrapper[4931]: I0130 06:08:10.422783 4931 scope.go:117] 
"RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:08:10 crc kubenswrapper[4931]: E0130 06:08:10.423864 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:08:25 crc kubenswrapper[4931]: I0130 06:08:25.429408 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:08:25 crc kubenswrapper[4931]: E0130 06:08:25.430838 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:08:40 crc kubenswrapper[4931]: I0130 06:08:40.421711 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:08:40 crc kubenswrapper[4931]: E0130 06:08:40.423044 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:08:51 crc kubenswrapper[4931]: I0130 06:08:51.422140 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:08:51 crc kubenswrapper[4931]: E0130 06:08:51.425678 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:09:04 crc kubenswrapper[4931]: I0130 06:09:04.422254 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:09:04 crc kubenswrapper[4931]: E0130 06:09:04.423332 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:09:17 crc kubenswrapper[4931]: I0130 06:09:17.438257 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:09:17 crc kubenswrapper[4931]: E0130 06:09:17.439242 4931 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:09:28 crc kubenswrapper[4931]: I0130 06:09:28.422245 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:09:28 crc kubenswrapper[4931]: E0130 06:09:28.423481 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:09:43 crc kubenswrapper[4931]: I0130 06:09:43.423091 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:09:43 crc kubenswrapper[4931]: E0130 06:09:43.423851 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:09:54 crc kubenswrapper[4931]: I0130 06:09:54.422571 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:09:54 crc kubenswrapper[4931]: E0130 06:09:54.425322 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:10:09 crc kubenswrapper[4931]: I0130 06:10:09.423069 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:10:09 crc kubenswrapper[4931]: E0130 06:10:09.425050 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:10:24 crc kubenswrapper[4931]: I0130 06:10:24.422790 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:10:24 crc kubenswrapper[4931]: E0130 06:10:24.424175 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:10:35 crc kubenswrapper[4931]: I0130 06:10:35.431808 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:10:35 crc kubenswrapper[4931]: E0130 06:10:35.448058 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:10:48 crc kubenswrapper[4931]: I0130 06:10:48.423173 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:10:48 crc kubenswrapper[4931]: E0130 06:10:48.424486 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:11:02 crc kubenswrapper[4931]: I0130 06:11:02.421676 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:11:02 crc kubenswrapper[4931]: E0130 06:11:02.422787 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:11:15 crc kubenswrapper[4931]: I0130 06:11:15.429835 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:11:15 crc kubenswrapper[4931]: E0130 06:11:15.431000 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:11:29 crc kubenswrapper[4931]: I0130 06:11:29.422553 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:11:29 crc kubenswrapper[4931]: E0130 06:11:29.423947 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:11:40 crc kubenswrapper[4931]: I0130 06:11:40.422703 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:11:40 crc kubenswrapper[4931]: E0130 06:11:40.423667 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:11:54 crc kubenswrapper[4931]: I0130 06:11:54.422018 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:11:54 crc kubenswrapper[4931]: E0130 06:11:54.423128 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:12:09 crc kubenswrapper[4931]: I0130 06:12:09.423312 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:12:09 crc kubenswrapper[4931]: E0130 06:12:09.424959 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:12:24 crc kubenswrapper[4931]: I0130 06:12:24.422046 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:12:24 crc kubenswrapper[4931]: E0130 06:12:24.423111 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:12:35 crc kubenswrapper[4931]: I0130 06:12:35.431855 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b"
Jan 30 06:12:36 crc kubenswrapper[4931]: I0130 06:12:36.326641 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"3235141ead611a0f7e3893625153168342d0a69d19fff5669f6dde20d2fc1d8b"}
podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="registry-server" Jan 30 06:13:19 crc kubenswrapper[4931]: I0130 06:13:19.909464 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="registry-server" Jan 30 06:13:19 crc kubenswrapper[4931]: E0130 06:13:19.909497 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="extract-content" Jan 30 06:13:19 crc kubenswrapper[4931]: I0130 06:13:19.909511 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="extract-content" Jan 30 06:13:19 crc kubenswrapper[4931]: E0130 06:13:19.909554 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="extract-utilities" Jan 30 06:13:19 crc kubenswrapper[4931]: I0130 06:13:19.909567 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="extract-utilities" Jan 30 06:13:19 crc kubenswrapper[4931]: I0130 06:13:19.909804 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8ef012-9169-4a7f-9a5f-089f037767cb" containerName="registry-server" Jan 30 06:13:19 crc kubenswrapper[4931]: I0130 06:13:19.911709 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:19 crc kubenswrapper[4931]: I0130 06:13:19.930308 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6226b"] Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.002640 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-utilities\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.002710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4gh\" (UniqueName: \"kubernetes.io/projected/9820847a-122c-4574-ae00-c9fa43dbcb5c-kube-api-access-mx4gh\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.002774 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-catalog-content\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.104393 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx4gh\" (UniqueName: \"kubernetes.io/projected/9820847a-122c-4574-ae00-c9fa43dbcb5c-kube-api-access-mx4gh\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.104559 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-catalog-content\") pod 
\"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.104653 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-utilities\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.105231 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-catalog-content\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.105249 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-utilities\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.132776 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx4gh\" (UniqueName: \"kubernetes.io/projected/9820847a-122c-4574-ae00-c9fa43dbcb5c-kube-api-access-mx4gh\") pod \"redhat-marketplace-6226b\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.248320 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:20 crc kubenswrapper[4931]: I0130 06:13:20.738782 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6226b"] Jan 30 06:13:20 crc kubenswrapper[4931]: W0130 06:13:20.754043 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9820847a_122c_4574_ae00_c9fa43dbcb5c.slice/crio-87b14e16ffd43648ee6b21151d7310f6fa08bdb830abcee645cb7d85c984b865 WatchSource:0}: Error finding container 87b14e16ffd43648ee6b21151d7310f6fa08bdb830abcee645cb7d85c984b865: Status 404 returned error can't find the container with id 87b14e16ffd43648ee6b21151d7310f6fa08bdb830abcee645cb7d85c984b865 Jan 30 06:13:21 crc kubenswrapper[4931]: I0130 06:13:21.757518 4931 generic.go:334] "Generic (PLEG): container finished" podID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerID="9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5" exitCode=0 Jan 30 06:13:21 crc kubenswrapper[4931]: I0130 06:13:21.757684 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerDied","Data":"9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5"} Jan 30 06:13:21 crc kubenswrapper[4931]: I0130 06:13:21.758158 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerStarted","Data":"87b14e16ffd43648ee6b21151d7310f6fa08bdb830abcee645cb7d85c984b865"} Jan 30 06:13:21 crc kubenswrapper[4931]: I0130 06:13:21.762337 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:13:22 crc kubenswrapper[4931]: I0130 06:13:22.764767 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerStarted","Data":"4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f"} Jan 30 06:13:23 crc kubenswrapper[4931]: I0130 06:13:23.776770 4931 generic.go:334] "Generic (PLEG): container finished" podID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerID="4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f" exitCode=0 Jan 30 06:13:23 crc kubenswrapper[4931]: I0130 06:13:23.776937 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerDied","Data":"4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f"} Jan 30 06:13:24 crc kubenswrapper[4931]: I0130 06:13:24.788697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerStarted","Data":"12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b"} Jan 30 06:13:24 crc kubenswrapper[4931]: I0130 06:13:24.826777 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6226b" podStartSLOduration=3.4010951289999998 podStartE2EDuration="5.826753856s" podCreationTimestamp="2026-01-30 06:13:19 +0000 UTC" firstStartedPulling="2026-01-30 06:13:21.761745868 +0000 UTC m=+3937.131656165" lastFinishedPulling="2026-01-30 06:13:24.187404595 +0000 UTC m=+3939.557314892" 
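The pod_startup_latency_tracker entry carries arithmetic worth spelling out: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The SLO-minus-pull relationship is an inference from these numbers, not documented behavior; a sketch checking it against the values logged above:

```go
// Sketch: reproduce the startup-duration arithmetic from the entry above.
// Timestamps are copied verbatim from the log.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 06:13:19 +0000 UTC")
	firstPull := mustParse("2026-01-30 06:13:21.761745868 +0000 UTC")
	lastPull := mustParse("2026-01-30 06:13:24.187404595 +0000 UTC")
	running := mustParse("2026-01-30 06:13:24.826753856 +0000 UTC")

	e2e := running.Sub(created)           // wall-clock start-to-running
	slo := e2e - lastPull.Sub(firstPull)  // same span, minus image pulls
	fmt.Println("E2E:", e2e)              // 5.826753856s, as logged
	fmt.Println("SLO:", slo)              // 3.401095129s, as logged
}
```

The logged podStartSLOduration=3.4010951289999998 is the same value printed as a raw float64, hence the trailing digits.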
observedRunningTime="2026-01-30 06:13:24.817484677 +0000 UTC m=+3940.187394944" watchObservedRunningTime="2026-01-30 06:13:24.826753856 +0000 UTC m=+3940.196664133" Jan 30 06:13:30 crc kubenswrapper[4931]: I0130 06:13:30.249367 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:30 crc kubenswrapper[4931]: I0130 06:13:30.250475 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:30 crc kubenswrapper[4931]: I0130 06:13:30.324332 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:31 crc kubenswrapper[4931]: I0130 06:13:31.264360 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:33 crc kubenswrapper[4931]: I0130 06:13:33.889201 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6226b"] Jan 30 06:13:33 crc kubenswrapper[4931]: I0130 06:13:33.889599 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6226b" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server" containerID="cri-o://12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b" gracePeriod=2 Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.506828 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.649174 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx4gh\" (UniqueName: \"kubernetes.io/projected/9820847a-122c-4574-ae00-c9fa43dbcb5c-kube-api-access-mx4gh\") pod \"9820847a-122c-4574-ae00-c9fa43dbcb5c\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.649274 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-catalog-content\") pod \"9820847a-122c-4574-ae00-c9fa43dbcb5c\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.649455 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-utilities\") pod \"9820847a-122c-4574-ae00-c9fa43dbcb5c\" (UID: \"9820847a-122c-4574-ae00-c9fa43dbcb5c\") " Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.651235 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-utilities" (OuterVolumeSpecName: "utilities") pod "9820847a-122c-4574-ae00-c9fa43dbcb5c" (UID: "9820847a-122c-4574-ae00-c9fa43dbcb5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.659467 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9820847a-122c-4574-ae00-c9fa43dbcb5c-kube-api-access-mx4gh" (OuterVolumeSpecName: "kube-api-access-mx4gh") pod "9820847a-122c-4574-ae00-c9fa43dbcb5c" (UID: "9820847a-122c-4574-ae00-c9fa43dbcb5c"). 
InnerVolumeSpecName "kube-api-access-mx4gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.685337 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9820847a-122c-4574-ae00-c9fa43dbcb5c" (UID: "9820847a-122c-4574-ae00-c9fa43dbcb5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.751713 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.751766 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx4gh\" (UniqueName: \"kubernetes.io/projected/9820847a-122c-4574-ae00-c9fa43dbcb5c-kube-api-access-mx4gh\") on node \"crc\" DevicePath \"\"" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.751787 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9820847a-122c-4574-ae00-c9fa43dbcb5c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.881077 4931 generic.go:334] "Generic (PLEG): container finished" podID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerID="12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b" exitCode=0 Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.881135 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerDied","Data":"12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b"} Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.881149 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6226b" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.881186 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6226b" event={"ID":"9820847a-122c-4574-ae00-c9fa43dbcb5c","Type":"ContainerDied","Data":"87b14e16ffd43648ee6b21151d7310f6fa08bdb830abcee645cb7d85c984b865"} Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.881222 4931 scope.go:117] "RemoveContainer" containerID="12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.922899 4931 scope.go:117] "RemoveContainer" containerID="4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.933143 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6226b"] Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.947729 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6226b"] Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.966131 4931 scope.go:117] "RemoveContainer" containerID="9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5" Jan 30 06:13:34 crc kubenswrapper[4931]: I0130 06:13:34.999293 4931 scope.go:117] "RemoveContainer" containerID="12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b" Jan 30 06:13:34 crc kubenswrapper[4931]: E0130 06:13:34.999823 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b\": container with ID starting with 12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b not found: ID does not exist" containerID="12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b" Jan 30 06:13:35 crc kubenswrapper[4931]: I0130 06:13:34.999885 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b"} err="failed to get container status \"12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b\": rpc error: code = NotFound desc = could not find container \"12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b\": container with ID starting with 12d931cfeb4103bb2568906ed16443a1e9d5632d761dc2358c0e1ef823d15d3b not found: ID does not exist" Jan 30 06:13:35 crc kubenswrapper[4931]: I0130 06:13:34.999912 4931 scope.go:117] "RemoveContainer" containerID="4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f" Jan 30 06:13:35 crc kubenswrapper[4931]: E0130 06:13:35.000410 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f\": container with ID starting with 4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f not found: ID does not exist" containerID="4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f" Jan 30 06:13:35 crc kubenswrapper[4931]: I0130 06:13:35.000461 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f"} err="failed to get container status \"4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f\": rpc error: code = NotFound desc = could not find 
container \"4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f\": container with ID starting with 4709a186bdcfef0e292891573314ae8946afbacfb45e50eef069867eaf13141f not found: ID does not exist" Jan 30 06:13:35 crc kubenswrapper[4931]: I0130 06:13:35.000482 4931 scope.go:117] "RemoveContainer" containerID="9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5" Jan 30 06:13:35 crc kubenswrapper[4931]: E0130 06:13:35.000895 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5\": container with ID starting with 9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5 not found: ID does not exist" containerID="9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5" Jan 30 06:13:35 crc kubenswrapper[4931]: I0130 06:13:35.000924 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5"} err="failed to get container status \"9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5\": rpc error: code = NotFound desc = could not find container \"9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5\": container with ID starting with 9717ea1e32889e7df9d8739d7f467a863036da2019d6afd7b48486952c11a7d5 not found: ID does not exist" Jan 30 06:13:35 crc kubenswrapper[4931]: I0130 06:13:35.437850 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" path="/var/lib/kubelet/pods/9820847a-122c-4574-ae00-c9fa43dbcb5c/volumes" Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.576083 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7pttl"] Jan 30 06:14:42 crc kubenswrapper[4931]: E0130 06:14:42.577502 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-utilities" Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.577536 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-utilities" Jan 30 06:14:42 crc kubenswrapper[4931]: E0130 06:14:42.577578 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-content" Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.577594 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-content" Jan 30 06:14:42 crc kubenswrapper[4931]: E0130 06:14:42.577650 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server" Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.577668 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server" Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.578012 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server" Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.580593 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.576083 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7pttl"]
Jan 30 06:14:42 crc kubenswrapper[4931]: E0130 06:14:42.577502 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-utilities"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.577536 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-utilities"
Jan 30 06:14:42 crc kubenswrapper[4931]: E0130 06:14:42.577578 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-content"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.577594 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="extract-content"
Jan 30 06:14:42 crc kubenswrapper[4931]: E0130 06:14:42.577650 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.577668 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.578012 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="9820847a-122c-4574-ae00-c9fa43dbcb5c" containerName="registry-server"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.580593 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.592665 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pttl"]
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.696999 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqlg\" (UniqueName: \"kubernetes.io/projected/fe96f298-ec26-408d-9726-27cbd48f1000-kube-api-access-fsqlg\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.697175 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-catalog-content\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.697526 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-utilities\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.799375 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-catalog-content\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.799542 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-utilities\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.799620 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsqlg\" (UniqueName: \"kubernetes.io/projected/fe96f298-ec26-408d-9726-27cbd48f1000-kube-api-access-fsqlg\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.800219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-catalog-content\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.800296 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-utilities\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.823538 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsqlg\" (UniqueName: \"kubernetes.io/projected/fe96f298-ec26-408d-9726-27cbd48f1000-kube-api-access-fsqlg\") pod \"certified-operators-7pttl\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:42 crc kubenswrapper[4931]: I0130 06:14:42.919030 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pttl"
Jan 30 06:14:43 crc kubenswrapper[4931]: I0130 06:14:43.159117 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pttl"]
Jan 30 06:14:43 crc kubenswrapper[4931]: I0130 06:14:43.528931 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe96f298-ec26-408d-9726-27cbd48f1000" containerID="b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688" exitCode=0
Jan 30 06:14:43 crc kubenswrapper[4931]: I0130 06:14:43.529018 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerDied","Data":"b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688"}
Jan 30 06:14:43 crc kubenswrapper[4931]: I0130 06:14:43.529337 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerStarted","Data":"0008547ef7be2a21ff5cf547b5d7fc2f4b3459ce1ea50374e2ee436469690c2e"}
Jan 30 06:14:44 crc kubenswrapper[4931]: I0130 06:14:44.544994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerStarted","Data":"20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9"}
Jan 30 06:14:45 crc kubenswrapper[4931]: I0130 06:14:45.556249 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe96f298-ec26-408d-9726-27cbd48f1000" containerID="20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9" exitCode=0
Jan 30 06:14:45 crc kubenswrapper[4931]: I0130 06:14:45.556306 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerDied","Data":"20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9"}
Jan 30 06:14:46 crc kubenswrapper[4931]: I0130 06:14:46.569972 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerStarted","Data":"681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca"}
Jan 30 06:14:46 crc kubenswrapper[4931]: I0130 06:14:46.600468 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7pttl" podStartSLOduration=2.1615184100000002 podStartE2EDuration="4.600402478s" podCreationTimestamp="2026-01-30 06:14:42 +0000 UTC" firstStartedPulling="2026-01-30 06:14:43.530572245 +0000 UTC m=+4018.900482532" lastFinishedPulling="2026-01-30 06:14:45.969456303 +0000 UTC m=+4021.339366600" observedRunningTime="2026-01-30 06:14:46.596993093 +0000 UTC m=+4021.966903410" watchObservedRunningTime="2026-01-30 06:14:46.600402478 +0000 UTC m=+4021.970312785"
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7pttl" Jan 30 06:14:52 crc kubenswrapper[4931]: I0130 06:14:52.921004 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7pttl" Jan 30 06:14:53 crc kubenswrapper[4931]: I0130 06:14:53.001361 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7pttl" Jan 30 06:14:53 crc kubenswrapper[4931]: I0130 06:14:53.723988 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7pttl" Jan 30 06:14:53 crc kubenswrapper[4931]: I0130 06:14:53.791453 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pttl"] Jan 30 06:14:55 crc kubenswrapper[4931]: I0130 06:14:55.650288 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7pttl" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="registry-server" containerID="cri-o://681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca" gracePeriod=2 Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.177265 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pttl" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.343069 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsqlg\" (UniqueName: \"kubernetes.io/projected/fe96f298-ec26-408d-9726-27cbd48f1000-kube-api-access-fsqlg\") pod \"fe96f298-ec26-408d-9726-27cbd48f1000\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.343225 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-utilities\") pod \"fe96f298-ec26-408d-9726-27cbd48f1000\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.343269 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-catalog-content\") pod \"fe96f298-ec26-408d-9726-27cbd48f1000\" (UID: \"fe96f298-ec26-408d-9726-27cbd48f1000\") " Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.344697 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-utilities" (OuterVolumeSpecName: "utilities") pod "fe96f298-ec26-408d-9726-27cbd48f1000" (UID: "fe96f298-ec26-408d-9726-27cbd48f1000"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.349503 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe96f298-ec26-408d-9726-27cbd48f1000-kube-api-access-fsqlg" (OuterVolumeSpecName: "kube-api-access-fsqlg") pod "fe96f298-ec26-408d-9726-27cbd48f1000" (UID: "fe96f298-ec26-408d-9726-27cbd48f1000"). InnerVolumeSpecName "kube-api-access-fsqlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.404532 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe96f298-ec26-408d-9726-27cbd48f1000" (UID: "fe96f298-ec26-408d-9726-27cbd48f1000"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.444973 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsqlg\" (UniqueName: \"kubernetes.io/projected/fe96f298-ec26-408d-9726-27cbd48f1000-kube-api-access-fsqlg\") on node \"crc\" DevicePath \"\"" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.445022 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.445041 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe96f298-ec26-408d-9726-27cbd48f1000-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.667405 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe96f298-ec26-408d-9726-27cbd48f1000" containerID="681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca" exitCode=0 Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.667522 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerDied","Data":"681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca"} Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.667573 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pttl" event={"ID":"fe96f298-ec26-408d-9726-27cbd48f1000","Type":"ContainerDied","Data":"0008547ef7be2a21ff5cf547b5d7fc2f4b3459ce1ea50374e2ee436469690c2e"} Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.667578 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pttl" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.667612 4931 scope.go:117] "RemoveContainer" containerID="681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.699232 4931 scope.go:117] "RemoveContainer" containerID="20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.724116 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pttl"] Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.742570 4931 scope.go:117] "RemoveContainer" containerID="b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.745179 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7pttl"] Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.787482 4931 scope.go:117] "RemoveContainer" containerID="681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca" Jan 30 06:14:56 crc kubenswrapper[4931]: E0130 06:14:56.788052 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca\": container with ID starting with 681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca not found: ID does not exist" containerID="681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.788128 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca"} err="failed to get container status \"681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca\": rpc error: code = NotFound desc = could not find container \"681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca\": container with ID starting with 681697db3d7f21be1eb126ee925add4f086d68ee090000e0e969acd0abaa7fca not found: ID does not exist" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.788175 4931 scope.go:117] "RemoveContainer" containerID="20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9" Jan 30 06:14:56 crc kubenswrapper[4931]: E0130 06:14:56.788657 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9\": container with ID starting with 20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9 not found: ID does not exist" containerID="20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.788721 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9"} err="failed to get container status \"20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9\": rpc error: code = NotFound desc = could not find container \"20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9\": container with ID starting with 20fa62f0325e13f9a0a0df805e269ffeddb7f5f6ec884c6688eb9af3d56750c9 not found: ID does not exist" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.788750 4931 scope.go:117] "RemoveContainer" 
containerID="b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688" Jan 30 06:14:56 crc kubenswrapper[4931]: E0130 06:14:56.789076 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688\": container with ID starting with b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688 not found: ID does not exist" containerID="b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688" Jan 30 06:14:56 crc kubenswrapper[4931]: I0130 06:14:56.789125 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688"} err="failed to get container status \"b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688\": rpc error: code = NotFound desc = could not find container \"b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688\": container with ID starting with b0c0c7ee21138cd6602968a5f94a11879dfd4c2478dbe3f4674a5869822eb688 not found: ID does not exist" Jan 30 06:14:57 crc kubenswrapper[4931]: I0130 06:14:57.363862 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:14:57 crc kubenswrapper[4931]: I0130 06:14:57.364461 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:14:57 crc kubenswrapper[4931]: I0130 06:14:57.439829 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" path="/var/lib/kubelet/pods/fe96f298-ec26-408d-9726-27cbd48f1000/volumes" Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.237883 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"] Jan 30 06:15:00 crc kubenswrapper[4931]: E0130 06:15:00.238249 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-content" Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238265 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-content" Jan 30 06:15:00 crc kubenswrapper[4931]: E0130 06:15:00.238283 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="registry-server" Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238291 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="registry-server" Jan 30 06:15:00 crc kubenswrapper[4931]: E0130 06:15:00.238301 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-utilities" Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238309 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-utilities" Jan 30 06:15:00 crc 
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.237883 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"]
Jan 30 06:15:00 crc kubenswrapper[4931]: E0130 06:15:00.238249 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-content"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238265 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-content"
Jan 30 06:15:00 crc kubenswrapper[4931]: E0130 06:15:00.238283 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="registry-server"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238291 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="registry-server"
Jan 30 06:15:00 crc kubenswrapper[4931]: E0130 06:15:00.238301 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-utilities"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238309 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="extract-utilities"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.238503 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe96f298-ec26-408d-9726-27cbd48f1000" containerName="registry-server"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.239032 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.244743 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.247376 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.269374 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"]
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.324242 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-config-volume\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.324350 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-secret-volume\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.324616 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdt9v\" (UniqueName: \"kubernetes.io/projected/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-kube-api-access-mdt9v\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.425716 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-secret-volume\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.425900 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdt9v\" (UniqueName: \"kubernetes.io/projected/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-kube-api-access-mdt9v\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.426026 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-config-volume\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.427602 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-config-volume\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.440731 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-secret-volume\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.447775 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdt9v\" (UniqueName: \"kubernetes.io/projected/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-kube-api-access-mdt9v\") pod \"collect-profiles-29495895-64nq8\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.562827 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:00 crc kubenswrapper[4931]: I0130 06:15:00.854713 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"]
Jan 30 06:15:01 crc kubenswrapper[4931]: I0130 06:15:01.713525 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8" event={"ID":"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e","Type":"ContainerStarted","Data":"d9b4e7ab55cdff59c11d247a35f15b900c5c2d23ac2be2cf5caa19378305d01e"}
Jan 30 06:15:01 crc kubenswrapper[4931]: I0130 06:15:01.715092 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8" event={"ID":"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e","Type":"ContainerStarted","Data":"c74a6b786346dd0ffbe00b1e070896f96528ce8f3c40277f6789ceba0bd660e2"}
Jan 30 06:15:01 crc kubenswrapper[4931]: I0130 06:15:01.736568 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8" podStartSLOduration=1.736544033 podStartE2EDuration="1.736544033s" podCreationTimestamp="2026-01-30 06:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:15:01.735881105 +0000 UTC m=+4037.105791372" watchObservedRunningTime="2026-01-30 06:15:01.736544033 +0000 UTC m=+4037.106454310"
Jan 30 06:15:02 crc kubenswrapper[4931]: I0130 06:15:02.725890 4931 generic.go:334] "Generic (PLEG): container finished" podID="f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" containerID="d9b4e7ab55cdff59c11d247a35f15b900c5c2d23ac2be2cf5caa19378305d01e" exitCode=0
Jan 30 06:15:02 crc kubenswrapper[4931]: I0130 06:15:02.725966 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8" event={"ID":"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e","Type":"ContainerDied","Data":"d9b4e7ab55cdff59c11d247a35f15b900c5c2d23ac2be2cf5caa19378305d01e"}
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.178190 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.284455 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-config-volume\") pod \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") "
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.284527 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdt9v\" (UniqueName: \"kubernetes.io/projected/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-kube-api-access-mdt9v\") pod \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") "
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.284659 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-secret-volume\") pod \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\" (UID: \"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e\") "
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.286170 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" (UID: "f7d1c4a0-d36c-47d4-b603-3320c87f7c8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.302343 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-kube-api-access-mdt9v" (OuterVolumeSpecName: "kube-api-access-mdt9v") pod "f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" (UID: "f7d1c4a0-d36c-47d4-b603-3320c87f7c8e"). InnerVolumeSpecName "kube-api-access-mdt9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.303148 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" (UID: "f7d1c4a0-d36c-47d4-b603-3320c87f7c8e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.386755 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.386807 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdt9v\" (UniqueName: \"kubernetes.io/projected/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-kube-api-access-mdt9v\") on node \"crc\" DevicePath \"\""
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.386830 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.747286 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8" event={"ID":"f7d1c4a0-d36c-47d4-b603-3320c87f7c8e","Type":"ContainerDied","Data":"c74a6b786346dd0ffbe00b1e070896f96528ce8f3c40277f6789ceba0bd660e2"}
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.747834 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74a6b786346dd0ffbe00b1e070896f96528ce8f3c40277f6789ceba0bd660e2"
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.747405 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.848297 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"]
Jan 30 06:15:04 crc kubenswrapper[4931]: I0130 06:15:04.858277 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495850-8t6xv"]
Jan 30 06:15:05 crc kubenswrapper[4931]: I0130 06:15:05.438788 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a19500-eb44-455f-a8b7-7ee5375b87ef" path="/var/lib/kubelet/pods/09a19500-eb44-455f-a8b7-7ee5375b87ef/volumes"
start-of-body= Jan 30 06:15:57 crc kubenswrapper[4931]: I0130 06:15:57.365115 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:15:57 crc kubenswrapper[4931]: I0130 06:15:57.365189 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:15:57 crc kubenswrapper[4931]: I0130 06:15:57.366226 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3235141ead611a0f7e3893625153168342d0a69d19fff5669f6dde20d2fc1d8b"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:15:57 crc kubenswrapper[4931]: I0130 06:15:57.366338 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://3235141ead611a0f7e3893625153168342d0a69d19fff5669f6dde20d2fc1d8b" gracePeriod=600 Jan 30 06:15:58 crc kubenswrapper[4931]: I0130 06:15:58.294643 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="3235141ead611a0f7e3893625153168342d0a69d19fff5669f6dde20d2fc1d8b" exitCode=0 Jan 30 06:15:58 crc kubenswrapper[4931]: I0130 06:15:58.294747 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"3235141ead611a0f7e3893625153168342d0a69d19fff5669f6dde20d2fc1d8b"} Jan 30 06:15:58 crc kubenswrapper[4931]: I0130 06:15:58.295343 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229"} Jan 30 06:15:58 crc kubenswrapper[4931]: I0130 06:15:58.295381 4931 scope.go:117] "RemoveContainer" containerID="761577d14354de60e9f89a79784490438e71d9747d27a00c144870c39871d54b" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.098730 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lw8bs"] Jan 30 06:16:45 crc kubenswrapper[4931]: E0130 06:16:45.103033 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" containerName="collect-profiles" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.103626 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" containerName="collect-profiles" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.104329 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" containerName="collect-profiles" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.110540 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.122569 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lw8bs"] Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.280085 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9591e541-c3a7-4565-a829-b3da700f84ff-utilities\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.280579 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frp8\" (UniqueName: \"kubernetes.io/projected/9591e541-c3a7-4565-a829-b3da700f84ff-kube-api-access-4frp8\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.280806 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9591e541-c3a7-4565-a829-b3da700f84ff-catalog-content\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.381622 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9591e541-c3a7-4565-a829-b3da700f84ff-catalog-content\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.381711 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9591e541-c3a7-4565-a829-b3da700f84ff-utilities\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.381840 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frp8\" (UniqueName: \"kubernetes.io/projected/9591e541-c3a7-4565-a829-b3da700f84ff-kube-api-access-4frp8\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.382904 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9591e541-c3a7-4565-a829-b3da700f84ff-catalog-content\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.382974 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9591e541-c3a7-4565-a829-b3da700f84ff-utilities\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.401939 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4frp8\" (UniqueName: \"kubernetes.io/projected/9591e541-c3a7-4565-a829-b3da700f84ff-kube-api-access-4frp8\") pod \"community-operators-lw8bs\" (UID: \"9591e541-c3a7-4565-a829-b3da700f84ff\") " pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.437852 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:45 crc kubenswrapper[4931]: I0130 06:16:45.935881 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lw8bs"] Jan 30 06:16:46 crc kubenswrapper[4931]: I0130 06:16:46.740331 4931 generic.go:334] "Generic (PLEG): container finished" podID="9591e541-c3a7-4565-a829-b3da700f84ff" containerID="98a178c8781f2c5745176d2c050503a39574e2f26cb3321c038637ccc4b2d914" exitCode=0 Jan 30 06:16:46 crc kubenswrapper[4931]: I0130 06:16:46.740407 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw8bs" event={"ID":"9591e541-c3a7-4565-a829-b3da700f84ff","Type":"ContainerDied","Data":"98a178c8781f2c5745176d2c050503a39574e2f26cb3321c038637ccc4b2d914"} Jan 30 06:16:46 crc kubenswrapper[4931]: I0130 06:16:46.741560 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw8bs" event={"ID":"9591e541-c3a7-4565-a829-b3da700f84ff","Type":"ContainerStarted","Data":"3621eafb4fe816d5783cf2a52ae599f2c303610ec84bd79ebd10aa3a4f24be32"} Jan 30 06:16:50 crc kubenswrapper[4931]: I0130 06:16:50.798849 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw8bs" event={"ID":"9591e541-c3a7-4565-a829-b3da700f84ff","Type":"ContainerStarted","Data":"b0d974a3f53b6a76e998aaf425b1eeee40fef7215c292540c1180f9ff99d9b4b"} Jan 30 06:16:51 crc kubenswrapper[4931]: I0130 06:16:51.812269 4931 generic.go:334] "Generic (PLEG): container finished" podID="9591e541-c3a7-4565-a829-b3da700f84ff" containerID="b0d974a3f53b6a76e998aaf425b1eeee40fef7215c292540c1180f9ff99d9b4b" exitCode=0 Jan 30 06:16:51 crc kubenswrapper[4931]: I0130 06:16:51.812377 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw8bs" event={"ID":"9591e541-c3a7-4565-a829-b3da700f84ff","Type":"ContainerDied","Data":"b0d974a3f53b6a76e998aaf425b1eeee40fef7215c292540c1180f9ff99d9b4b"} Jan 30 06:16:52 crc kubenswrapper[4931]: I0130 06:16:52.824086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw8bs" event={"ID":"9591e541-c3a7-4565-a829-b3da700f84ff","Type":"ContainerStarted","Data":"778afc5ebb1e169282450845aacae8ec9fe089e1af268dbbc12463e6f9e10e7e"} Jan 30 06:16:52 crc kubenswrapper[4931]: I0130 06:16:52.858554 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lw8bs" podStartSLOduration=2.300149483 podStartE2EDuration="7.858528437s" podCreationTimestamp="2026-01-30 06:16:45 +0000 UTC" firstStartedPulling="2026-01-30 06:16:46.742177982 +0000 UTC m=+4142.112088279" lastFinishedPulling="2026-01-30 06:16:52.300556936 +0000 UTC m=+4147.670467233" observedRunningTime="2026-01-30 06:16:52.852690995 +0000 UTC m=+4148.222601262" watchObservedRunningTime="2026-01-30 06:16:52.858528437 +0000 UTC m=+4148.228438734" Jan 30 06:16:55 crc kubenswrapper[4931]: I0130 06:16:55.438820 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:55 crc kubenswrapper[4931]: I0130 06:16:55.439662 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:16:55 crc kubenswrapper[4931]: I0130 06:16:55.519624 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:17:05 crc kubenswrapper[4931]: I0130 06:17:05.509317 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lw8bs" Jan 30 06:17:05 crc kubenswrapper[4931]: I0130 06:17:05.613578 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lw8bs"] Jan 30 06:17:05 crc kubenswrapper[4931]: I0130 06:17:05.678160 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kf2zk"] Jan 30 06:17:05 crc kubenswrapper[4931]: I0130 06:17:05.678558 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kf2zk" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="registry-server" containerID="cri-o://f2435e1bbad2c342892feef9048c365b9106973a68f19a101621695990d1928d" gracePeriod=2 Jan 30 06:17:05 crc kubenswrapper[4931]: I0130 06:17:05.926440 4931 generic.go:334] "Generic (PLEG): container finished" podID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerID="f2435e1bbad2c342892feef9048c365b9106973a68f19a101621695990d1928d" exitCode=0 Jan 30 06:17:05 crc kubenswrapper[4931]: I0130 06:17:05.926520 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerDied","Data":"f2435e1bbad2c342892feef9048c365b9106973a68f19a101621695990d1928d"} Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.066177 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.266251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-utilities\") pod \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.266364 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kl88\" (UniqueName: \"kubernetes.io/projected/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-kube-api-access-8kl88\") pod \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.266467 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-catalog-content\") pod \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\" (UID: \"4e7fc26b-b0a0-4ed3-973a-d14f3118f495\") " Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.274104 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-utilities" (OuterVolumeSpecName: "utilities") pod "4e7fc26b-b0a0-4ed3-973a-d14f3118f495" (UID: "4e7fc26b-b0a0-4ed3-973a-d14f3118f495"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.279617 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-kube-api-access-8kl88" (OuterVolumeSpecName: "kube-api-access-8kl88") pod "4e7fc26b-b0a0-4ed3-973a-d14f3118f495" (UID: "4e7fc26b-b0a0-4ed3-973a-d14f3118f495"). InnerVolumeSpecName "kube-api-access-8kl88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.317230 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e7fc26b-b0a0-4ed3-973a-d14f3118f495" (UID: "4e7fc26b-b0a0-4ed3-973a-d14f3118f495"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.368379 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kl88\" (UniqueName: \"kubernetes.io/projected/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-kube-api-access-8kl88\") on node \"crc\" DevicePath \"\"" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.368433 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.368442 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7fc26b-b0a0-4ed3-973a-d14f3118f495-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.942273 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kf2zk" event={"ID":"4e7fc26b-b0a0-4ed3-973a-d14f3118f495","Type":"ContainerDied","Data":"1886542e78e49a17a6d2a06541b8bb125f47e58db49595a42b19bb6b2b8126f7"} Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.942361 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kf2zk" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.942767 4931 scope.go:117] "RemoveContainer" containerID="f2435e1bbad2c342892feef9048c365b9106973a68f19a101621695990d1928d" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.967077 4931 scope.go:117] "RemoveContainer" containerID="3075fb652ae28ffe627ee2fb3e561168de8141b7ab9f92e2f6fdc3f70ab564db" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.981306 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kf2zk"] Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.994523 4931 scope.go:117] "RemoveContainer" containerID="354aad0cad4b5a2844a0aaa97a5d9c4e75d0d2f7996caccea5b63021c15588c0" Jan 30 06:17:06 crc kubenswrapper[4931]: I0130 06:17:06.995589 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kf2zk"] Jan 30 06:17:07 crc kubenswrapper[4931]: I0130 06:17:07.431339 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" path="/var/lib/kubelet/pods/4e7fc26b-b0a0-4ed3-973a-d14f3118f495/volumes" Jan 30 06:17:57 crc kubenswrapper[4931]: I0130 06:17:57.363637 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:17:57 crc kubenswrapper[4931]: I0130 06:17:57.364778 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:18:27 crc kubenswrapper[4931]: I0130 06:18:27.363589 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:18:27 crc kubenswrapper[4931]: I0130 06:18:27.364179 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:18:57 crc kubenswrapper[4931]: I0130 06:18:57.363722 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:18:57 crc kubenswrapper[4931]: I0130 06:18:57.364348 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:18:57 crc kubenswrapper[4931]: I0130 06:18:57.364417 4931 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:18:57 crc kubenswrapper[4931]: I0130 06:18:57.365236 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:18:57 crc kubenswrapper[4931]: I0130 06:18:57.365335 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" gracePeriod=600 Jan 30 06:18:57 crc kubenswrapper[4931]: E0130 06:18:57.507996 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:18:58 crc kubenswrapper[4931]: I0130 06:18:58.037939 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" exitCode=0 Jan 30 06:18:58 crc kubenswrapper[4931]: I0130 06:18:58.037968 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229"} Jan 30 06:18:58 crc kubenswrapper[4931]: I0130 06:18:58.038637 4931 scope.go:117] "RemoveContainer" containerID="3235141ead611a0f7e3893625153168342d0a69d19fff5669f6dde20d2fc1d8b" Jan 30 06:18:58 crc kubenswrapper[4931]: I0130 06:18:58.039339 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:18:58 crc kubenswrapper[4931]: E0130 06:18:58.039778 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:19:09 crc kubenswrapper[4931]: I0130 06:19:09.422455 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:19:09 crc kubenswrapper[4931]: E0130 06:19:09.423468 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 
06:19:21 crc kubenswrapper[4931]: I0130 06:19:21.422680 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:19:21 crc kubenswrapper[4931]: E0130 06:19:21.423492 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.484242 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvwtk"] Jan 30 06:19:30 crc kubenswrapper[4931]: E0130 06:19:30.484995 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="registry-server" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.485007 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="registry-server" Jan 30 06:19:30 crc kubenswrapper[4931]: E0130 06:19:30.485025 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="extract-content" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.485031 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="extract-content" Jan 30 06:19:30 crc kubenswrapper[4931]: E0130 06:19:30.485047 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="extract-utilities" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.485053 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="extract-utilities" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.485179 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7fc26b-b0a0-4ed3-973a-d14f3118f495" containerName="registry-server" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.486020 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.499206 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvwtk"] Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.576244 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-utilities\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.576302 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfb57\" (UniqueName: \"kubernetes.io/projected/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-kube-api-access-gfb57\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.576339 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-catalog-content\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.677606 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-utilities\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.677679 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfb57\" (UniqueName: \"kubernetes.io/projected/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-kube-api-access-gfb57\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.677762 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-catalog-content\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.678551 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-catalog-content\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.678572 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-utilities\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.696853 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gfb57\" (UniqueName: \"kubernetes.io/projected/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-kube-api-access-gfb57\") pod \"redhat-operators-pvwtk\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:30 crc kubenswrapper[4931]: I0130 06:19:30.804979 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:31 crc kubenswrapper[4931]: I0130 06:19:31.310637 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvwtk"] Jan 30 06:19:31 crc kubenswrapper[4931]: I0130 06:19:31.345015 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerStarted","Data":"f397bc428f5dd22c0df0143328bdde718f24dac9ab63094f67925c4d228021cd"} Jan 30 06:19:32 crc kubenswrapper[4931]: I0130 06:19:32.352789 4931 generic.go:334] "Generic (PLEG): container finished" podID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerID="b6286ad94de30681c2a835c7e301c6d1df98a96018bfe2306f9440b968c77016" exitCode=0 Jan 30 06:19:32 crc kubenswrapper[4931]: I0130 06:19:32.352883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerDied","Data":"b6286ad94de30681c2a835c7e301c6d1df98a96018bfe2306f9440b968c77016"} Jan 30 06:19:32 crc kubenswrapper[4931]: I0130 06:19:32.356170 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:19:33 crc kubenswrapper[4931]: I0130 06:19:33.366696 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerStarted","Data":"f8f5eb6693ba1ad57ad1b7da16804037a17ede8e8520ed704cf51910ecbe0943"} Jan 30 06:19:34 crc kubenswrapper[4931]: I0130 06:19:34.378501 4931 generic.go:334] "Generic (PLEG): container finished" podID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerID="f8f5eb6693ba1ad57ad1b7da16804037a17ede8e8520ed704cf51910ecbe0943" exitCode=0 Jan 30 06:19:34 crc kubenswrapper[4931]: I0130 06:19:34.378555 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerDied","Data":"f8f5eb6693ba1ad57ad1b7da16804037a17ede8e8520ed704cf51910ecbe0943"} Jan 30 06:19:35 crc kubenswrapper[4931]: I0130 06:19:35.410468 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerStarted","Data":"8a43ab2a73d1194d386a7a60f91b57edea527c87bc38e74d204537d7f9f1a4b8"} Jan 30 06:19:35 crc kubenswrapper[4931]: I0130 06:19:35.441927 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvwtk" podStartSLOduration=3.022135117 podStartE2EDuration="5.44190379s" podCreationTimestamp="2026-01-30 06:19:30 +0000 UTC" firstStartedPulling="2026-01-30 06:19:32.355949548 +0000 UTC m=+4307.725859805" lastFinishedPulling="2026-01-30 06:19:34.775718181 +0000 UTC m=+4310.145628478" observedRunningTime="2026-01-30 06:19:35.441314024 +0000 UTC m=+4310.811224321" watchObservedRunningTime="2026-01-30 06:19:35.44190379 +0000 UTC m=+4310.811814087" Jan 30 06:19:36 crc 
kubenswrapper[4931]: I0130 06:19:36.421760 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:19:36 crc kubenswrapper[4931]: E0130 06:19:36.422054 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:19:40 crc kubenswrapper[4931]: I0130 06:19:40.805509 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:40 crc kubenswrapper[4931]: I0130 06:19:40.806236 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:41 crc kubenswrapper[4931]: I0130 06:19:41.877468 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pvwtk" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="registry-server" probeResult="failure" output=< Jan 30 06:19:41 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:19:41 crc kubenswrapper[4931]: > Jan 30 06:19:50 crc kubenswrapper[4931]: I0130 06:19:50.882728 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:50 crc kubenswrapper[4931]: I0130 06:19:50.962794 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:51 crc kubenswrapper[4931]: I0130 06:19:51.139591 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvwtk"] Jan 30 06:19:51 crc kubenswrapper[4931]: I0130 06:19:51.422110 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:19:51 crc kubenswrapper[4931]: E0130 06:19:51.422485 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:19:52 crc kubenswrapper[4931]: I0130 06:19:52.608077 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvwtk" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="registry-server" containerID="cri-o://8a43ab2a73d1194d386a7a60f91b57edea527c87bc38e74d204537d7f9f1a4b8" gracePeriod=2 Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.643020 4931 generic.go:334] "Generic (PLEG): container finished" podID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerID="8a43ab2a73d1194d386a7a60f91b57edea527c87bc38e74d204537d7f9f1a4b8" exitCode=0 Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.643114 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" 
event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerDied","Data":"8a43ab2a73d1194d386a7a60f91b57edea527c87bc38e74d204537d7f9f1a4b8"} Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.877055 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.978720 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-catalog-content\") pod \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.978834 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfb57\" (UniqueName: \"kubernetes.io/projected/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-kube-api-access-gfb57\") pod \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.978911 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-utilities\") pod \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\" (UID: \"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4\") " Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.979887 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-utilities" (OuterVolumeSpecName: "utilities") pod "14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" (UID: "14543fe1-7a55-41f4-ab2d-fa5727bcf0c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:19:53 crc kubenswrapper[4931]: I0130 06:19:53.986635 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-kube-api-access-gfb57" (OuterVolumeSpecName: "kube-api-access-gfb57") pod "14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" (UID: "14543fe1-7a55-41f4-ab2d-fa5727bcf0c4"). InnerVolumeSpecName "kube-api-access-gfb57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.081305 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfb57\" (UniqueName: \"kubernetes.io/projected/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-kube-api-access-gfb57\") on node \"crc\" DevicePath \"\"" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.081364 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.169109 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" (UID: "14543fe1-7a55-41f4-ab2d-fa5727bcf0c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.183381 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.657361 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvwtk" event={"ID":"14543fe1-7a55-41f4-ab2d-fa5727bcf0c4","Type":"ContainerDied","Data":"f397bc428f5dd22c0df0143328bdde718f24dac9ab63094f67925c4d228021cd"} Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.657882 4931 scope.go:117] "RemoveContainer" containerID="8a43ab2a73d1194d386a7a60f91b57edea527c87bc38e74d204537d7f9f1a4b8" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.657527 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvwtk" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.705495 4931 scope.go:117] "RemoveContainer" containerID="f8f5eb6693ba1ad57ad1b7da16804037a17ede8e8520ed704cf51910ecbe0943" Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.726742 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvwtk"] Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.738689 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvwtk"] Jan 30 06:19:54 crc kubenswrapper[4931]: I0130 06:19:54.745548 4931 scope.go:117] "RemoveContainer" containerID="b6286ad94de30681c2a835c7e301c6d1df98a96018bfe2306f9440b968c77016" Jan 30 06:19:55 crc kubenswrapper[4931]: I0130 06:19:55.442664 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" path="/var/lib/kubelet/pods/14543fe1-7a55-41f4-ab2d-fa5727bcf0c4/volumes" Jan 30 06:20:02 crc kubenswrapper[4931]: I0130 06:20:02.422164 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:20:02 crc kubenswrapper[4931]: E0130 06:20:02.423105 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.178241 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-mlqzd"] Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.188379 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-mlqzd"] Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.327953 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-h9twk"] Jan 30 06:20:04 crc kubenswrapper[4931]: E0130 06:20:04.328513 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="registry-server" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.328543 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="registry-server" Jan 30 06:20:04 crc kubenswrapper[4931]: 
E0130 06:20:04.328589 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="extract-utilities" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.328602 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="extract-utilities" Jan 30 06:20:04 crc kubenswrapper[4931]: E0130 06:20:04.328630 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="extract-content" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.328644 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="extract-content" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.328892 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="14543fe1-7a55-41f4-ab2d-fa5727bcf0c4" containerName="registry-server" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.329651 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.332813 4931 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-ff66z" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.333364 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.333556 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.334163 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.344488 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h9twk"] Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.395739 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-crc-storage\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.395826 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxzf\" (UniqueName: \"kubernetes.io/projected/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-kube-api-access-2rxzf\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.396125 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-node-mnt\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.498216 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-crc-storage\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc 
kubenswrapper[4931]: I0130 06:20:04.498311 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxzf\" (UniqueName: \"kubernetes.io/projected/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-kube-api-access-2rxzf\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.498520 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-node-mnt\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.498962 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-node-mnt\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.499496 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-crc-storage\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.520641 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxzf\" (UniqueName: \"kubernetes.io/projected/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-kube-api-access-2rxzf\") pod \"crc-storage-crc-h9twk\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:04 crc kubenswrapper[4931]: I0130 06:20:04.689230 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:05 crc kubenswrapper[4931]: I0130 06:20:05.015274 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-h9twk"] Jan 30 06:20:05 crc kubenswrapper[4931]: I0130 06:20:05.439418 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f395498-8955-4aa5-b283-62e5b12505f1" path="/var/lib/kubelet/pods/7f395498-8955-4aa5-b283-62e5b12505f1/volumes" Jan 30 06:20:05 crc kubenswrapper[4931]: I0130 06:20:05.761920 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h9twk" event={"ID":"edadf0b9-7f51-445f-8dd1-53dc9fae53aa","Type":"ContainerStarted","Data":"720bcba1a2545c662bf1fa3d80619562b3bc89fa22675dd3d667df179fa2299b"} Jan 30 06:20:06 crc kubenswrapper[4931]: I0130 06:20:06.775262 4931 generic.go:334] "Generic (PLEG): container finished" podID="edadf0b9-7f51-445f-8dd1-53dc9fae53aa" containerID="7d286e4ff9e3a5d29e83c4a7e4320e5360dd3ec6c72cd95a6b0fdf400bac7103" exitCode=0 Jan 30 06:20:06 crc kubenswrapper[4931]: I0130 06:20:06.775639 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h9twk" event={"ID":"edadf0b9-7f51-445f-8dd1-53dc9fae53aa","Type":"ContainerDied","Data":"7d286e4ff9e3a5d29e83c4a7e4320e5360dd3ec6c72cd95a6b0fdf400bac7103"} Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.151039 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.270387 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rxzf\" (UniqueName: \"kubernetes.io/projected/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-kube-api-access-2rxzf\") pod \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.270614 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-crc-storage\") pod \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.270696 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-node-mnt\") pod \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\" (UID: \"edadf0b9-7f51-445f-8dd1-53dc9fae53aa\") " Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.270915 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "edadf0b9-7f51-445f-8dd1-53dc9fae53aa" (UID: "edadf0b9-7f51-445f-8dd1-53dc9fae53aa"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.271306 4931 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.279407 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-kube-api-access-2rxzf" (OuterVolumeSpecName: "kube-api-access-2rxzf") pod "edadf0b9-7f51-445f-8dd1-53dc9fae53aa" (UID: "edadf0b9-7f51-445f-8dd1-53dc9fae53aa"). InnerVolumeSpecName "kube-api-access-2rxzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.304508 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "edadf0b9-7f51-445f-8dd1-53dc9fae53aa" (UID: "edadf0b9-7f51-445f-8dd1-53dc9fae53aa"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.372298 4931 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.372337 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rxzf\" (UniqueName: \"kubernetes.io/projected/edadf0b9-7f51-445f-8dd1-53dc9fae53aa-kube-api-access-2rxzf\") on node \"crc\" DevicePath \"\"" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.797291 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-h9twk" event={"ID":"edadf0b9-7f51-445f-8dd1-53dc9fae53aa","Type":"ContainerDied","Data":"720bcba1a2545c662bf1fa3d80619562b3bc89fa22675dd3d667df179fa2299b"} Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.797695 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720bcba1a2545c662bf1fa3d80619562b3bc89fa22675dd3d667df179fa2299b" Jan 30 06:20:08 crc kubenswrapper[4931]: I0130 06:20:08.797570 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-h9twk" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.555839 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-h9twk"] Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.566657 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-h9twk"] Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.725155 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-b8f6v"] Jan 30 06:20:10 crc kubenswrapper[4931]: E0130 06:20:10.725624 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edadf0b9-7f51-445f-8dd1-53dc9fae53aa" containerName="storage" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.725655 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="edadf0b9-7f51-445f-8dd1-53dc9fae53aa" containerName="storage" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.725893 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="edadf0b9-7f51-445f-8dd1-53dc9fae53aa" containerName="storage" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.726695 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.729325 4931 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-ff66z" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.730410 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.732132 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.739198 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.745710 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-b8f6v"] Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.812586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-crc-storage\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.812767 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw24d\" (UniqueName: \"kubernetes.io/projected/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-kube-api-access-nw24d\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.813116 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-node-mnt\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.914359 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-node-mnt\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.914491 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-crc-storage\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.914579 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw24d\" (UniqueName: \"kubernetes.io/projected/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-kube-api-access-nw24d\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.914781 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-node-mnt\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " 
pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.915756 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-crc-storage\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:10 crc kubenswrapper[4931]: I0130 06:20:10.947961 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw24d\" (UniqueName: \"kubernetes.io/projected/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-kube-api-access-nw24d\") pod \"crc-storage-crc-b8f6v\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:11 crc kubenswrapper[4931]: I0130 06:20:11.057544 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:11 crc kubenswrapper[4931]: I0130 06:20:11.346665 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-b8f6v"] Jan 30 06:20:11 crc kubenswrapper[4931]: W0130 06:20:11.358831 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5d2f9de_7c7c_4cc3_9bce_e244b9d73535.slice/crio-1b8dfa464b7df25bb3e4d6dc1b8c41f38c19d57162b9bd74a5c4218406cbda56 WatchSource:0}: Error finding container 1b8dfa464b7df25bb3e4d6dc1b8c41f38c19d57162b9bd74a5c4218406cbda56: Status 404 returned error can't find the container with id 1b8dfa464b7df25bb3e4d6dc1b8c41f38c19d57162b9bd74a5c4218406cbda56 Jan 30 06:20:11 crc kubenswrapper[4931]: I0130 06:20:11.433860 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edadf0b9-7f51-445f-8dd1-53dc9fae53aa" path="/var/lib/kubelet/pods/edadf0b9-7f51-445f-8dd1-53dc9fae53aa/volumes" Jan 30 06:20:11 crc kubenswrapper[4931]: I0130 06:20:11.839548 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-b8f6v" event={"ID":"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535","Type":"ContainerStarted","Data":"1b8dfa464b7df25bb3e4d6dc1b8c41f38c19d57162b9bd74a5c4218406cbda56"} Jan 30 06:20:12 crc kubenswrapper[4931]: I0130 06:20:12.851858 4931 generic.go:334] "Generic (PLEG): container finished" podID="e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" containerID="c6a166ae7c4660bbbdc44dc7c1b8670619ea5b26cb3c5e0fee1c8bbd8f4bb2af" exitCode=0 Jan 30 06:20:12 crc kubenswrapper[4931]: I0130 06:20:12.852014 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-b8f6v" event={"ID":"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535","Type":"ContainerDied","Data":"c6a166ae7c4660bbbdc44dc7c1b8670619ea5b26cb3c5e0fee1c8bbd8f4bb2af"} Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.225575 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.370888 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-crc-storage\") pod \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.370997 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-node-mnt\") pod \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.371102 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw24d\" (UniqueName: \"kubernetes.io/projected/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-kube-api-access-nw24d\") pod \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\" (UID: \"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535\") " Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.371564 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" (UID: "e5d2f9de-7c7c-4cc3-9bce-e244b9d73535"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.371805 4931 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.379418 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-kube-api-access-nw24d" (OuterVolumeSpecName: "kube-api-access-nw24d") pod "e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" (UID: "e5d2f9de-7c7c-4cc3-9bce-e244b9d73535"). InnerVolumeSpecName "kube-api-access-nw24d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.401757 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" (UID: "e5d2f9de-7c7c-4cc3-9bce-e244b9d73535"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.473371 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw24d\" (UniqueName: \"kubernetes.io/projected/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-kube-api-access-nw24d\") on node \"crc\" DevicePath \"\"" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.473407 4931 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e5d2f9de-7c7c-4cc3-9bce-e244b9d73535-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.872782 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-b8f6v" event={"ID":"e5d2f9de-7c7c-4cc3-9bce-e244b9d73535","Type":"ContainerDied","Data":"1b8dfa464b7df25bb3e4d6dc1b8c41f38c19d57162b9bd74a5c4218406cbda56"} Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.872842 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b8dfa464b7df25bb3e4d6dc1b8c41f38c19d57162b9bd74a5c4218406cbda56" Jan 30 06:20:14 crc kubenswrapper[4931]: I0130 06:20:14.872858 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-b8f6v" Jan 30 06:20:16 crc kubenswrapper[4931]: I0130 06:20:16.422621 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:20:16 crc kubenswrapper[4931]: E0130 06:20:16.423554 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:20:31 crc kubenswrapper[4931]: I0130 06:20:31.421861 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:20:31 crc kubenswrapper[4931]: E0130 06:20:31.423160 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:20:46 crc kubenswrapper[4931]: I0130 06:20:46.422273 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:20:46 crc kubenswrapper[4931]: E0130 06:20:46.423807 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:20:53 crc kubenswrapper[4931]: I0130 06:20:53.351472 4931 scope.go:117] "RemoveContainer" containerID="ceeb8bcdff334f1b3490e1ee30443dff7dd6fd17a3f2d90428a1f38ad6f3cd5e" Jan 30 06:21:00 crc kubenswrapper[4931]: I0130 
06:21:00.424058 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:21:00 crc kubenswrapper[4931]: E0130 06:21:00.426250 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:21:12 crc kubenswrapper[4931]: I0130 06:21:12.422614 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:21:12 crc kubenswrapper[4931]: E0130 06:21:12.423841 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:21:24 crc kubenswrapper[4931]: I0130 06:21:24.422476 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:21:24 crc kubenswrapper[4931]: E0130 06:21:24.423852 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:21:36 crc kubenswrapper[4931]: I0130 06:21:36.421735 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:21:36 crc kubenswrapper[4931]: E0130 06:21:36.422773 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:21:48 crc kubenswrapper[4931]: I0130 06:21:48.421860 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:21:48 crc kubenswrapper[4931]: E0130 06:21:48.422763 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:22:02 crc kubenswrapper[4931]: I0130 06:22:02.422829 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:22:02 crc kubenswrapper[4931]: E0130 06:22:02.423878 
4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:22:13 crc kubenswrapper[4931]: I0130 06:22:13.422192 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:22:13 crc kubenswrapper[4931]: E0130 06:22:13.423227 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:22:26 crc kubenswrapper[4931]: I0130 06:22:26.422950 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:22:26 crc kubenswrapper[4931]: E0130 06:22:26.424385 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:22:37 crc kubenswrapper[4931]: I0130 06:22:37.426028 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:22:37 crc kubenswrapper[4931]: E0130 06:22:37.427015 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:22:48 crc kubenswrapper[4931]: I0130 06:22:48.423013 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:22:48 crc kubenswrapper[4931]: E0130 06:22:48.424276 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:23:01 crc kubenswrapper[4931]: I0130 06:23:01.422098 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:23:01 crc kubenswrapper[4931]: E0130 06:23:01.423143 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:23:13 crc kubenswrapper[4931]: I0130 06:23:13.422145 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:23:13 crc kubenswrapper[4931]: E0130 06:23:13.423266 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.324939 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-x7g4t"] Jan 30 06:23:22 crc kubenswrapper[4931]: E0130 06:23:22.325598 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" containerName="storage" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.325612 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" containerName="storage" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.325732 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d2f9de-7c7c-4cc3-9bce-e244b9d73535" containerName="storage" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.326606 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.329257 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.329479 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.329711 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.331448 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ccbcs" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.335377 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-x7g4t"] Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.336354 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.425403 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lncd9\" (UniqueName: \"kubernetes.io/projected/6aa8cfa6-8d93-4f4c-844e-f180daf03802-kube-api-access-lncd9\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.425463 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-config\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: 
\"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.426062 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-dns-svc\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.527436 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-config\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.527771 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-dns-svc\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.527802 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lncd9\" (UniqueName: \"kubernetes.io/projected/6aa8cfa6-8d93-4f4c-844e-f180daf03802-kube-api-access-lncd9\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.528578 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-dns-svc\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.532213 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-config\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.558275 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-7dq6z"] Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.559448 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.563512 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lncd9\" (UniqueName: \"kubernetes.io/projected/6aa8cfa6-8d93-4f4c-844e-f180daf03802-kube-api-access-lncd9\") pod \"dnsmasq-dns-95587bc99-x7g4t\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.613482 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-7dq6z"] Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.646946 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.734039 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2x7\" (UniqueName: \"kubernetes.io/projected/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-kube-api-access-5l2x7\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.734127 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-config\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.734154 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.836571 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2x7\" (UniqueName: \"kubernetes.io/projected/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-kube-api-access-5l2x7\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.836891 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-config\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.836935 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.837905 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.837998 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-config\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.903677 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2x7\" (UniqueName: \"kubernetes.io/projected/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-kube-api-access-5l2x7\") pod \"dnsmasq-dns-5d79f765b5-7dq6z\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" 
Jan 30 06:23:22 crc kubenswrapper[4931]: I0130 06:23:22.916458 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.120062 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-x7g4t"] Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.327058 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-7dq6z"] Jan 30 06:23:23 crc kubenswrapper[4931]: W0130 06:23:23.358108 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda43ae7a0_f3ca_4eb9_ae23_e6dc8f4e5a1c.slice/crio-df23f9bc8284d347a4dda805a55b9a64a2739f707161ba449d8a6399ee4d3665 WatchSource:0}: Error finding container df23f9bc8284d347a4dda805a55b9a64a2739f707161ba449d8a6399ee4d3665: Status 404 returned error can't find the container with id df23f9bc8284d347a4dda805a55b9a64a2739f707161ba449d8a6399ee4d3665 Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.452365 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.453783 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.457948 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.457971 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.458189 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.458273 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mgh9s" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.458477 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.469314 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650415 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650504 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650550 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ffda181-212b-42f4-bd56-9ab2864ded3c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc 
kubenswrapper[4931]: I0130 06:23:23.650593 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650614 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ffda181-212b-42f4-bd56-9ab2864ded3c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650713 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650928 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ngdz\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-kube-api-access-2ngdz\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.650967 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.651059 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.705982 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.707110 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.710636 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.710750 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.711348 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wgz6z" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.712145 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.712405 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.729310 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752059 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ngdz\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-kube-api-access-2ngdz\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752108 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752152 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752188 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752212 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752237 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ffda181-212b-42f4-bd56-9ab2864ded3c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752255 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752270 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ffda181-212b-42f4-bd56-9ab2864ded3c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752295 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.752758 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.754157 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.755616 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.756035 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.756927 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.762876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ffda181-212b-42f4-bd56-9ab2864ded3c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.763344 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ffda181-212b-42f4-bd56-9ab2864ded3c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.773302 4931 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.773353 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/96f6795482cf208a0436fcedd4e13f5ef58c9a3e2d9d6166beea188ab34f9e81/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.779568 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ngdz\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-kube-api-access-2ngdz\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.808802 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") " pod="openstack/rabbitmq-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853185 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853237 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853290 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853495 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3e54229-729f-4bfc-a208-dc39edc35b8a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3e54229-729f-4bfc-a208-dc39edc35b8a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: 
I0130 06:23:23.853631 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknnm\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-kube-api-access-bknnm\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853653 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853681 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.853745 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.930931 4931 generic.go:334] "Generic (PLEG): container finished" podID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerID="85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7" exitCode=0 Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.930995 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" event={"ID":"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c","Type":"ContainerDied","Data":"85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7"} Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.931338 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" event={"ID":"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c","Type":"ContainerStarted","Data":"df23f9bc8284d347a4dda805a55b9a64a2739f707161ba449d8a6399ee4d3665"} Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.932907 4931 generic.go:334] "Generic (PLEG): container finished" podID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerID="05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5" exitCode=0 Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.932941 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" event={"ID":"6aa8cfa6-8d93-4f4c-844e-f180daf03802","Type":"ContainerDied","Data":"05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5"} Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.932986 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" event={"ID":"6aa8cfa6-8d93-4f4c-844e-f180daf03802","Type":"ContainerStarted","Data":"1d3b909cce0f1614532a2470422ac028e30b8c0cc10d3660b10b49f6654468a0"} Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.955545 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.955722 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.955834 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.955931 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3e54229-729f-4bfc-a208-dc39edc35b8a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.956011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3e54229-729f-4bfc-a208-dc39edc35b8a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.956119 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bknnm\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-kube-api-access-bknnm\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.956227 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.956304 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.956404 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.958072 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.960400 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.961768 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.963163 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.965874 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.967488 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.967532 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/081bf8f84c60dfb918ea0eb5418be09e59105cf3295dde894d1b133731bc6391/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.967843 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3e54229-729f-4bfc-a208-dc39edc35b8a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.980580 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bknnm\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-kube-api-access-bknnm\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:23 crc kubenswrapper[4931]: I0130 06:23:23.984301 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3e54229-729f-4bfc-a208-dc39edc35b8a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.017331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.030198 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.104409 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.422666 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:23:24 crc kubenswrapper[4931]: E0130 06:23:24.423614 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.502627 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:23:24 crc kubenswrapper[4931]: W0130 06:23:24.506355 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e54229_729f_4bfc_a208_dc39edc35b8a.slice/crio-8795a34898e6841b11366ed2b6c1d443c46d48e62ae64548732f8a0d5ccd026d WatchSource:0}: Error finding container 8795a34898e6841b11366ed2b6c1d443c46d48e62ae64548732f8a0d5ccd026d: Status 404 returned error can't find the container with id 8795a34898e6841b11366ed2b6c1d443c46d48e62ae64548732f8a0d5ccd026d Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.570690 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:23:24 crc kubenswrapper[4931]: W0130 06:23:24.588273 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ffda181_212b_42f4_bd56_9ab2864ded3c.slice/crio-d1546caee0e50a6ac2e387f8e39e4451c82f903d75d2682ba4430ed473fac38f WatchSource:0}: Error finding container d1546caee0e50a6ac2e387f8e39e4451c82f903d75d2682ba4430ed473fac38f: Status 404 returned error can't find the container with id d1546caee0e50a6ac2e387f8e39e4451c82f903d75d2682ba4430ed473fac38f Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.774324 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.776469 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.778871 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x2rnl" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.778884 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.779106 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.780290 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.788608 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.790872 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.881966 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882008 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882042 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-kolla-config\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882078 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882104 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-config-data-default\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882298 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882399 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2cw\" (UniqueName: \"kubernetes.io/projected/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-kube-api-access-mh2cw\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.882484 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.961173 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" event={"ID":"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c","Type":"ContainerStarted","Data":"8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46"} Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.963495 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.966772 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" event={"ID":"6aa8cfa6-8d93-4f4c-844e-f180daf03802","Type":"ContainerStarted","Data":"550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316"} Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.967728 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.968680 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ffda181-212b-42f4-bd56-9ab2864ded3c","Type":"ContainerStarted","Data":"d1546caee0e50a6ac2e387f8e39e4451c82f903d75d2682ba4430ed473fac38f"} Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.969890 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3e54229-729f-4bfc-a208-dc39edc35b8a","Type":"ContainerStarted","Data":"8795a34898e6841b11366ed2b6c1d443c46d48e62ae64548732f8a0d5ccd026d"} Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.984818 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.984915 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.984942 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.984976 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-kolla-config\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.985017 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.985048 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-config-data-default\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.985100 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.985133 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2cw\" (UniqueName: \"kubernetes.io/projected/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-kube-api-access-mh2cw\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.986692 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-kolla-config\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.986914 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.988866 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-config-data-default\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:24 crc kubenswrapper[4931]: I0130 06:23:24.989037 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.000758 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.000828 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fcebf3741f68bcc13c2190545ffc3184ab1144655693389ba5c7ca5ba7137f7b/globalmount\"" pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.009844 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" podStartSLOduration=3.009815087 podStartE2EDuration="3.009815087s" podCreationTimestamp="2026-01-30 06:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:23:24.998919272 +0000 UTC m=+4540.368829539" watchObservedRunningTime="2026-01-30 06:23:25.009815087 +0000 UTC m=+4540.379725384" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.013538 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.019471 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.022818 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2cw\" (UniqueName: \"kubernetes.io/projected/01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3-kube-api-access-mh2cw\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.031559 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" podStartSLOduration=3.031529184 podStartE2EDuration="3.031529184s" podCreationTimestamp="2026-01-30 06:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:23:25.025795373 +0000 UTC m=+4540.395705670" watchObservedRunningTime="2026-01-30 06:23:25.031529184 +0000 UTC m=+4540.401439461" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.061169 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-369a6bb3-0312-4fe7-ba1c-d7defd15de60\") pod \"openstack-galera-0\" (UID: \"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3\") " pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.099448 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.390491 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.391856 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.394354 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.394410 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4wtw7" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.404976 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.500756 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c3365d-6967-42e2-b00c-887a82a1b73e-config-data\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.500829 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5c3365d-6967-42e2-b00c-887a82a1b73e-kolla-config\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.500857 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hln\" (UniqueName: \"kubernetes.io/projected/f5c3365d-6967-42e2-b00c-887a82a1b73e-kube-api-access-b7hln\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.527065 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.602991 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5c3365d-6967-42e2-b00c-887a82a1b73e-kolla-config\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.603086 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7hln\" (UniqueName: \"kubernetes.io/projected/f5c3365d-6967-42e2-b00c-887a82a1b73e-kube-api-access-b7hln\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.603252 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c3365d-6967-42e2-b00c-887a82a1b73e-config-data\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.603860 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5c3365d-6967-42e2-b00c-887a82a1b73e-kolla-config\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " 
pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.604362 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c3365d-6967-42e2-b00c-887a82a1b73e-config-data\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.706522 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7hln\" (UniqueName: \"kubernetes.io/projected/f5c3365d-6967-42e2-b00c-887a82a1b73e-kube-api-access-b7hln\") pod \"memcached-0\" (UID: \"f5c3365d-6967-42e2-b00c-887a82a1b73e\") " pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.710581 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.982277 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3e54229-729f-4bfc-a208-dc39edc35b8a","Type":"ContainerStarted","Data":"ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f"} Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.984356 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ffda181-212b-42f4-bd56-9ab2864ded3c","Type":"ContainerStarted","Data":"1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee"} Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.987206 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3","Type":"ContainerStarted","Data":"347f4a99c878c60ced04fec19dba23244dbe8505c061092184cd7fe73d781eff"} Jan 30 06:23:25 crc kubenswrapper[4931]: I0130 06:23:25.987252 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3","Type":"ContainerStarted","Data":"bfffff77e88f4155244cfa8d546485a9a12b5f3703b432b1c04b887ff976bbd6"} Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.195525 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 06:23:26 crc kubenswrapper[4931]: W0130 06:23:26.198786 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5c3365d_6967_42e2_b00c_887a82a1b73e.slice/crio-7d3f316e2e1f08c4867ec8dfa647ae413af47ae1b01c28d4c9d377e88fe5ee34 WatchSource:0}: Error finding container 7d3f316e2e1f08c4867ec8dfa647ae413af47ae1b01c28d4c9d377e88fe5ee34: Status 404 returned error can't find the container with id 7d3f316e2e1f08c4867ec8dfa647ae413af47ae1b01c28d4c9d377e88fe5ee34 Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.451056 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.452436 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.454783 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2tv2r" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.457781 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.457926 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.458959 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.487189 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.620534 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66165b19-dfc8-403f-ae09-30299db6b19f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.621872 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.622057 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165b19-dfc8-403f-ae09-30299db6b19f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.622241 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.622354 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165b19-dfc8-403f-ae09-30299db6b19f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.622551 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtp4h\" (UniqueName: \"kubernetes.io/projected/66165b19-dfc8-403f-ae09-30299db6b19f-kube-api-access-dtp4h\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.622710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.622831 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724212 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtp4h\" (UniqueName: \"kubernetes.io/projected/66165b19-dfc8-403f-ae09-30299db6b19f-kube-api-access-dtp4h\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724317 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724377 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724480 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66165b19-dfc8-403f-ae09-30299db6b19f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724552 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724597 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165b19-dfc8-403f-ae09-30299db6b19f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724648 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.724684 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165b19-dfc8-403f-ae09-30299db6b19f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.725386 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/66165b19-dfc8-403f-ae09-30299db6b19f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.726628 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.726676 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.728689 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66165b19-dfc8-403f-ae09-30299db6b19f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.732564 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66165b19-dfc8-403f-ae09-30299db6b19f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.734082 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.734138 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ee78c3c6a638a66d2e564da262d5ca34b72507ff7765ff278d9157cb38212396/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.735302 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/66165b19-dfc8-403f-ae09-30299db6b19f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.743090 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtp4h\" (UniqueName: \"kubernetes.io/projected/66165b19-dfc8-403f-ae09-30299db6b19f-kube-api-access-dtp4h\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.761134 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52ee0189-780a-4812-b1e3-fbd3952c4c00\") pod \"openstack-cell1-galera-0\" (UID: \"66165b19-dfc8-403f-ae09-30299db6b19f\") " pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.811681 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.996575 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f5c3365d-6967-42e2-b00c-887a82a1b73e","Type":"ContainerStarted","Data":"50f2dbb718a935ab17cb4ed5b180bca0eb84c7ca8339c04e4870c7611e9d5001"} Jan 30 06:23:26 crc kubenswrapper[4931]: I0130 06:23:26.997029 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f5c3365d-6967-42e2-b00c-887a82a1b73e","Type":"ContainerStarted","Data":"7d3f316e2e1f08c4867ec8dfa647ae413af47ae1b01c28d4c9d377e88fe5ee34"} Jan 30 06:23:27 crc kubenswrapper[4931]: I0130 06:23:27.020102 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.020079351 podStartE2EDuration="2.020079351s" podCreationTimestamp="2026-01-30 06:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:23:27.01397134 +0000 UTC m=+4542.383881607" watchObservedRunningTime="2026-01-30 06:23:27.020079351 +0000 UTC m=+4542.389989608" Jan 30 06:23:27 crc kubenswrapper[4931]: W0130 06:23:27.282861 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66165b19_dfc8_403f_ae09_30299db6b19f.slice/crio-a8e6ee76c5e248997766131dd53dfc389bc3138a1466ea69cca0e56dfead52e1 WatchSource:0}: Error finding container a8e6ee76c5e248997766131dd53dfc389bc3138a1466ea69cca0e56dfead52e1: Status 404 returned error can't find the container with id a8e6ee76c5e248997766131dd53dfc389bc3138a1466ea69cca0e56dfead52e1 Jan 30 06:23:27 crc kubenswrapper[4931]: I0130 06:23:27.283967 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 06:23:28 crc kubenswrapper[4931]: I0130 06:23:28.009114 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66165b19-dfc8-403f-ae09-30299db6b19f","Type":"ContainerStarted","Data":"8589e17dde43a588a9507ad40fe1ffee93a647ef1321ae9eab7554654bab0d99"} Jan 30 06:23:28 crc kubenswrapper[4931]: I0130 06:23:28.009724 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 06:23:28 crc kubenswrapper[4931]: I0130 06:23:28.009760 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66165b19-dfc8-403f-ae09-30299db6b19f","Type":"ContainerStarted","Data":"a8e6ee76c5e248997766131dd53dfc389bc3138a1466ea69cca0e56dfead52e1"} Jan 30 06:23:30 crc kubenswrapper[4931]: I0130 06:23:30.027058 4931 generic.go:334] "Generic (PLEG): container finished" podID="01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3" containerID="347f4a99c878c60ced04fec19dba23244dbe8505c061092184cd7fe73d781eff" exitCode=0 Jan 30 06:23:30 crc kubenswrapper[4931]: I0130 06:23:30.027135 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3","Type":"ContainerDied","Data":"347f4a99c878c60ced04fec19dba23244dbe8505c061092184cd7fe73d781eff"} Jan 30 06:23:31 crc kubenswrapper[4931]: I0130 06:23:31.039580 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3","Type":"ContainerStarted","Data":"7b543a983037944ab2115585669090d4c1c4c5d04dfcf7437337b1e278dd9fcf"} Jan 30 06:23:31 crc kubenswrapper[4931]: I0130 06:23:31.079610 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.079584641 podStartE2EDuration="8.079584641s" podCreationTimestamp="2026-01-30 06:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:23:31.068651425 +0000 UTC m=+4546.438561712" watchObservedRunningTime="2026-01-30 06:23:31.079584641 +0000 UTC m=+4546.449494938" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.052515 4931 generic.go:334] "Generic (PLEG): container finished" podID="66165b19-dfc8-403f-ae09-30299db6b19f" containerID="8589e17dde43a588a9507ad40fe1ffee93a647ef1321ae9eab7554654bab0d99" exitCode=0 Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.052616 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66165b19-dfc8-403f-ae09-30299db6b19f","Type":"ContainerDied","Data":"8589e17dde43a588a9507ad40fe1ffee93a647ef1321ae9eab7554654bab0d99"} Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.413709 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44rwk"] Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.416134 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.423056 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44rwk"] Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.546598 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjt8j\" (UniqueName: \"kubernetes.io/projected/a76cda73-9eb5-4a03-aa82-713af868b080-kube-api-access-pjt8j\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.546724 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-catalog-content\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.546769 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-utilities\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.648389 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjt8j\" (UniqueName: \"kubernetes.io/projected/a76cda73-9eb5-4a03-aa82-713af868b080-kube-api-access-pjt8j\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.648480 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-catalog-content\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.648508 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-utilities\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.648676 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.648924 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-utilities\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.649081 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-catalog-content\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.684244 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjt8j\" (UniqueName: \"kubernetes.io/projected/a76cda73-9eb5-4a03-aa82-713af868b080-kube-api-access-pjt8j\") pod \"redhat-marketplace-44rwk\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") " pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.733630 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44rwk" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.917543 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:23:32 crc kubenswrapper[4931]: I0130 06:23:32.997839 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-x7g4t"] Jan 30 06:23:33 crc kubenswrapper[4931]: I0130 06:23:33.061179 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerName="dnsmasq-dns" containerID="cri-o://550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316" gracePeriod=10 Jan 30 06:23:33 crc kubenswrapper[4931]: I0130 06:23:33.061489 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"66165b19-dfc8-403f-ae09-30299db6b19f","Type":"ContainerStarted","Data":"c8da065dc67a7d340a0bfa87355c64913e5f8e8af5dcfd198ef08fb14f210ff4"} Jan 30 06:23:33 crc kubenswrapper[4931]: I0130 06:23:33.086128 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.086111602 podStartE2EDuration="8.086111602s" podCreationTimestamp="2026-01-30 06:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:23:33.081995786 +0000 UTC m=+4548.451906043" watchObservedRunningTime="2026-01-30 06:23:33.086111602 +0000 UTC m=+4548.456021859" Jan 30 06:23:33 crc kubenswrapper[4931]: I0130 06:23:33.205185 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44rwk"] Jan 30 06:23:33 crc kubenswrapper[4931]: W0130 06:23:33.710860 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda76cda73_9eb5_4a03_aa82_713af868b080.slice/crio-13f9e353d857817a966433846a45a41341967cdad2202309b77ec7da31ff2c31 WatchSource:0}: Error finding container 13f9e353d857817a966433846a45a41341967cdad2202309b77ec7da31ff2c31: Status 404 returned error can't find the container with id 13f9e353d857817a966433846a45a41341967cdad2202309b77ec7da31ff2c31 Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.003099 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.071243 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lncd9\" (UniqueName: \"kubernetes.io/projected/6aa8cfa6-8d93-4f4c-844e-f180daf03802-kube-api-access-lncd9\") pod \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.071376 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-config\") pod \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.071529 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-dns-svc\") pod \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\" (UID: \"6aa8cfa6-8d93-4f4c-844e-f180daf03802\") " Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.079108 4931 generic.go:334] "Generic (PLEG): container finished" podID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerID="550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316" exitCode=0 Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.079474 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.080078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" event={"ID":"6aa8cfa6-8d93-4f4c-844e-f180daf03802","Type":"ContainerDied","Data":"550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316"} Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.080113 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-x7g4t" event={"ID":"6aa8cfa6-8d93-4f4c-844e-f180daf03802","Type":"ContainerDied","Data":"1d3b909cce0f1614532a2470422ac028e30b8c0cc10d3660b10b49f6654468a0"} Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.080134 4931 scope.go:117] "RemoveContainer" containerID="550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.081475 4931 generic.go:334] "Generic (PLEG): container finished" podID="a76cda73-9eb5-4a03-aa82-713af868b080" containerID="afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd" exitCode=0 Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.081497 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44rwk" event={"ID":"a76cda73-9eb5-4a03-aa82-713af868b080","Type":"ContainerDied","Data":"afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd"} Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.081515 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44rwk" event={"ID":"a76cda73-9eb5-4a03-aa82-713af868b080","Type":"ContainerStarted","Data":"13f9e353d857817a966433846a45a41341967cdad2202309b77ec7da31ff2c31"} Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.103694 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa8cfa6-8d93-4f4c-844e-f180daf03802-kube-api-access-lncd9" (OuterVolumeSpecName: "kube-api-access-lncd9") pod 
"6aa8cfa6-8d93-4f4c-844e-f180daf03802" (UID: "6aa8cfa6-8d93-4f4c-844e-f180daf03802"). InnerVolumeSpecName "kube-api-access-lncd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.131078 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6aa8cfa6-8d93-4f4c-844e-f180daf03802" (UID: "6aa8cfa6-8d93-4f4c-844e-f180daf03802"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.137139 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-config" (OuterVolumeSpecName: "config") pod "6aa8cfa6-8d93-4f4c-844e-f180daf03802" (UID: "6aa8cfa6-8d93-4f4c-844e-f180daf03802"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.174449 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.174501 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lncd9\" (UniqueName: \"kubernetes.io/projected/6aa8cfa6-8d93-4f4c-844e-f180daf03802-kube-api-access-lncd9\") on node \"crc\" DevicePath \"\"" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.174522 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa8cfa6-8d93-4f4c-844e-f180daf03802-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.189521 4931 scope.go:117] "RemoveContainer" containerID="05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.208016 4931 scope.go:117] "RemoveContainer" containerID="550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316" Jan 30 06:23:34 crc kubenswrapper[4931]: E0130 06:23:34.208495 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316\": container with ID starting with 550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316 not found: ID does not exist" containerID="550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.208525 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316"} err="failed to get container status \"550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316\": rpc error: code = NotFound desc = could not find container \"550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316\": container with ID starting with 550a3fe365e2455dfd0d4c3bd533972ce18601eb9855307b54e4d35d258dd316 not found: ID does not exist" Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.208543 4931 scope.go:117] "RemoveContainer" containerID="05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5" Jan 30 06:23:34 crc kubenswrapper[4931]: E0130 06:23:34.209073 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5\": container with ID starting with 05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5 not found: ID does not exist" containerID="05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5"
Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.209095 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5"} err="failed to get container status \"05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5\": rpc error: code = NotFound desc = could not find container \"05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5\": container with ID starting with 05364d77fba2566dc0fb047fc3895d7c2ab607cd60db2040c9304dc2551d05d5 not found: ID does not exist"
Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.442045 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-x7g4t"]
Jan 30 06:23:34 crc kubenswrapper[4931]: I0130 06:23:34.454213 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-x7g4t"]
Jan 30 06:23:35 crc kubenswrapper[4931]: I0130 06:23:35.099965 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 06:23:35 crc kubenswrapper[4931]: I0130 06:23:35.100724 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 06:23:35 crc kubenswrapper[4931]: I0130 06:23:35.113114 4931 generic.go:334] "Generic (PLEG): container finished" podID="a76cda73-9eb5-4a03-aa82-713af868b080" containerID="f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587" exitCode=0
Jan 30 06:23:35 crc kubenswrapper[4931]: I0130 06:23:35.113163 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44rwk" event={"ID":"a76cda73-9eb5-4a03-aa82-713af868b080","Type":"ContainerDied","Data":"f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587"}
Jan 30 06:23:35 crc kubenswrapper[4931]: I0130 06:23:35.433326 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" path="/var/lib/kubelet/pods/6aa8cfa6-8d93-4f4c-844e-f180daf03802/volumes"
Jan 30 06:23:35 crc kubenswrapper[4931]: I0130 06:23:35.712236 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 30 06:23:36 crc kubenswrapper[4931]: I0130 06:23:36.127070 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44rwk" event={"ID":"a76cda73-9eb5-4a03-aa82-713af868b080","Type":"ContainerStarted","Data":"37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35"}
Jan 30 06:23:36 crc kubenswrapper[4931]: I0130 06:23:36.152829 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44rwk" podStartSLOduration=2.585329306 podStartE2EDuration="4.152801139s" podCreationTimestamp="2026-01-30 06:23:32 +0000 UTC" firstStartedPulling="2026-01-30 06:23:34.083238266 +0000 UTC m=+4549.453148533" lastFinishedPulling="2026-01-30 06:23:35.650710069 +0000 UTC m=+4551.020620366" observedRunningTime="2026-01-30 06:23:36.150272628 +0000 UTC m=+4551.520182885" watchObservedRunningTime="2026-01-30 06:23:36.152801139 +0000 UTC m=+4551.522711396"
Jan 30 06:23:36 crc kubenswrapper[4931]: I0130 06:23:36.813085 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 06:23:36 crc kubenswrapper[4931]: I0130 06:23:36.813141 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 06:23:37 crc kubenswrapper[4931]: I0130 06:23:37.933861 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 30 06:23:38 crc kubenswrapper[4931]: I0130 06:23:38.053529 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 30 06:23:38 crc kubenswrapper[4931]: I0130 06:23:38.423110 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229"
Jan 30 06:23:38 crc kubenswrapper[4931]: E0130 06:23:38.423617 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:23:39 crc kubenswrapper[4931]: I0130 06:23:39.387193 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 06:23:39 crc kubenswrapper[4931]: I0130 06:23:39.517933 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 06:23:42 crc kubenswrapper[4931]: I0130 06:23:42.734651 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44rwk"
Jan 30 06:23:42 crc kubenswrapper[4931]: I0130 06:23:42.735021 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44rwk"
Jan 30 06:23:42 crc kubenswrapper[4931]: I0130 06:23:42.809894 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44rwk"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.272354 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44rwk"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.324190 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44rwk"]
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.751527 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hfrzc"]
Jan 30 06:23:43 crc kubenswrapper[4931]: E0130 06:23:43.751954 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerName="init"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.751977 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerName="init"
Jan 30 06:23:43 crc kubenswrapper[4931]: E0130 06:23:43.752007 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerName="dnsmasq-dns"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.752021 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerName="dnsmasq-dns"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.752324 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa8cfa6-8d93-4f4c-844e-f180daf03802" containerName="dnsmasq-dns"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.753282 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.759822 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.772353 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hfrzc"]
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.842847 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zn5\" (UniqueName: \"kubernetes.io/projected/fe686ff0-2117-48c0-bde6-41faa75e59b7-kube-api-access-l6zn5\") pod \"root-account-create-update-hfrzc\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") " pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.842925 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe686ff0-2117-48c0-bde6-41faa75e59b7-operator-scripts\") pod \"root-account-create-update-hfrzc\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") " pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.944877 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zn5\" (UniqueName: \"kubernetes.io/projected/fe686ff0-2117-48c0-bde6-41faa75e59b7-kube-api-access-l6zn5\") pod \"root-account-create-update-hfrzc\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") " pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.945010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe686ff0-2117-48c0-bde6-41faa75e59b7-operator-scripts\") pod \"root-account-create-update-hfrzc\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") " pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.946398 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe686ff0-2117-48c0-bde6-41faa75e59b7-operator-scripts\") pod \"root-account-create-update-hfrzc\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") " pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:43 crc kubenswrapper[4931]: I0130 06:23:43.979521 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zn5\" (UniqueName: \"kubernetes.io/projected/fe686ff0-2117-48c0-bde6-41faa75e59b7-kube-api-access-l6zn5\") pod \"root-account-create-update-hfrzc\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") " pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:44 crc kubenswrapper[4931]: I0130 06:23:44.082777 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:44 crc kubenswrapper[4931]: I0130 06:23:44.668666 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hfrzc"]
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.231140 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe686ff0-2117-48c0-bde6-41faa75e59b7" containerID="71a8124b599814d410f6d79c3260191d88bc5b88a1405b9bfb832aebcb013dc4" exitCode=0
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.231220 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hfrzc" event={"ID":"fe686ff0-2117-48c0-bde6-41faa75e59b7","Type":"ContainerDied","Data":"71a8124b599814d410f6d79c3260191d88bc5b88a1405b9bfb832aebcb013dc4"}
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.231711 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hfrzc" event={"ID":"fe686ff0-2117-48c0-bde6-41faa75e59b7","Type":"ContainerStarted","Data":"2035b861086e20c1fb342c823c4eb82d86fc588a7b4c49c61d68ad62b4ba70ac"}
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.231930 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-44rwk" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="registry-server" containerID="cri-o://37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35" gracePeriod=2
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.864546 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44rwk"
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.986646 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-catalog-content\") pod \"a76cda73-9eb5-4a03-aa82-713af868b080\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") "
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.986725 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjt8j\" (UniqueName: \"kubernetes.io/projected/a76cda73-9eb5-4a03-aa82-713af868b080-kube-api-access-pjt8j\") pod \"a76cda73-9eb5-4a03-aa82-713af868b080\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") "
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.986768 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-utilities\") pod \"a76cda73-9eb5-4a03-aa82-713af868b080\" (UID: \"a76cda73-9eb5-4a03-aa82-713af868b080\") "
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.987764 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-utilities" (OuterVolumeSpecName: "utilities") pod "a76cda73-9eb5-4a03-aa82-713af868b080" (UID: "a76cda73-9eb5-4a03-aa82-713af868b080"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:23:45 crc kubenswrapper[4931]: I0130 06:23:45.999785 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76cda73-9eb5-4a03-aa82-713af868b080-kube-api-access-pjt8j" (OuterVolumeSpecName: "kube-api-access-pjt8j") pod "a76cda73-9eb5-4a03-aa82-713af868b080" (UID: "a76cda73-9eb5-4a03-aa82-713af868b080"). InnerVolumeSpecName "kube-api-access-pjt8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.027784 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a76cda73-9eb5-4a03-aa82-713af868b080" (UID: "a76cda73-9eb5-4a03-aa82-713af868b080"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.089004 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjt8j\" (UniqueName: \"kubernetes.io/projected/a76cda73-9eb5-4a03-aa82-713af868b080-kube-api-access-pjt8j\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.089041 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.089057 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a76cda73-9eb5-4a03-aa82-713af868b080-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.247276 4931 generic.go:334] "Generic (PLEG): container finished" podID="a76cda73-9eb5-4a03-aa82-713af868b080" containerID="37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35" exitCode=0
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.247353 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44rwk" event={"ID":"a76cda73-9eb5-4a03-aa82-713af868b080","Type":"ContainerDied","Data":"37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35"}
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.247454 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44rwk" event={"ID":"a76cda73-9eb5-4a03-aa82-713af868b080","Type":"ContainerDied","Data":"13f9e353d857817a966433846a45a41341967cdad2202309b77ec7da31ff2c31"}
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.247487 4931 scope.go:117] "RemoveContainer" containerID="37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.247865 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44rwk"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.289495 4931 scope.go:117] "RemoveContainer" containerID="f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.309974 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44rwk"]
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.321880 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-44rwk"]
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.816348 4931 scope.go:117] "RemoveContainer" containerID="afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.905291 4931 scope.go:117] "RemoveContainer" containerID="37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35"
Jan 30 06:23:46 crc kubenswrapper[4931]: E0130 06:23:46.905929 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35\": container with ID starting with 37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35 not found: ID does not exist" containerID="37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.905981 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35"} err="failed to get container status \"37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35\": rpc error: code = NotFound desc = could not find container \"37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35\": container with ID starting with 37a60ee51f1eacc8f4a70b4bc576ee6a4ef069c876a9fe53fe08b8bd96e2ba35 not found: ID does not exist"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.906018 4931 scope.go:117] "RemoveContainer" containerID="f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587"
Jan 30 06:23:46 crc kubenswrapper[4931]: E0130 06:23:46.907584 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587\": container with ID starting with f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587 not found: ID does not exist" containerID="f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.907647 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587"} err="failed to get container status \"f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587\": rpc error: code = NotFound desc = could not find container \"f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587\": container with ID starting with f4f135e27efe07e242a7194bef57d51e8baf7e9411831f604ac271ebfa69f587 not found: ID does not exist"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.907709 4931 scope.go:117] "RemoveContainer" containerID="afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd"
Jan 30 06:23:46 crc kubenswrapper[4931]: E0130 06:23:46.908222 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd\": container with ID starting with afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd not found: ID does not exist" containerID="afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.908315 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd"} err="failed to get container status \"afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd\": rpc error: code = NotFound desc = could not find container \"afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd\": container with ID starting with afd03841b8f733b7ed95951668267d08caf2a799b4210665020fcf7a1845facd not found: ID does not exist"
Jan 30 06:23:46 crc kubenswrapper[4931]: I0130 06:23:46.929739 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.005351 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe686ff0-2117-48c0-bde6-41faa75e59b7-operator-scripts\") pod \"fe686ff0-2117-48c0-bde6-41faa75e59b7\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") "
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.005510 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zn5\" (UniqueName: \"kubernetes.io/projected/fe686ff0-2117-48c0-bde6-41faa75e59b7-kube-api-access-l6zn5\") pod \"fe686ff0-2117-48c0-bde6-41faa75e59b7\" (UID: \"fe686ff0-2117-48c0-bde6-41faa75e59b7\") "
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.006795 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe686ff0-2117-48c0-bde6-41faa75e59b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe686ff0-2117-48c0-bde6-41faa75e59b7" (UID: "fe686ff0-2117-48c0-bde6-41faa75e59b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.011829 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe686ff0-2117-48c0-bde6-41faa75e59b7-kube-api-access-l6zn5" (OuterVolumeSpecName: "kube-api-access-l6zn5") pod "fe686ff0-2117-48c0-bde6-41faa75e59b7" (UID: "fe686ff0-2117-48c0-bde6-41faa75e59b7"). InnerVolumeSpecName "kube-api-access-l6zn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.107482 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zn5\" (UniqueName: \"kubernetes.io/projected/fe686ff0-2117-48c0-bde6-41faa75e59b7-kube-api-access-l6zn5\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.107859 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe686ff0-2117-48c0-bde6-41faa75e59b7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.263002 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hfrzc" event={"ID":"fe686ff0-2117-48c0-bde6-41faa75e59b7","Type":"ContainerDied","Data":"2035b861086e20c1fb342c823c4eb82d86fc588a7b4c49c61d68ad62b4ba70ac"}
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.263070 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2035b861086e20c1fb342c823c4eb82d86fc588a7b4c49c61d68ad62b4ba70ac"
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.263159 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hfrzc"
Jan 30 06:23:47 crc kubenswrapper[4931]: I0130 06:23:47.438553 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" path="/var/lib/kubelet/pods/a76cda73-9eb5-4a03-aa82-713af868b080/volumes"
Jan 30 06:23:50 crc kubenswrapper[4931]: I0130 06:23:50.467849 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hfrzc"]
Jan 30 06:23:50 crc kubenswrapper[4931]: I0130 06:23:50.474756 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hfrzc"]
Jan 30 06:23:51 crc kubenswrapper[4931]: I0130 06:23:51.436377 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe686ff0-2117-48c0-bde6-41faa75e59b7" path="/var/lib/kubelet/pods/fe686ff0-2117-48c0-bde6-41faa75e59b7/volumes"
Jan 30 06:23:52 crc kubenswrapper[4931]: I0130 06:23:52.422720 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229"
Jan 30 06:23:52 crc kubenswrapper[4931]: E0130 06:23:52.423162 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.485007 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4x52j"]
Jan 30 06:23:55 crc kubenswrapper[4931]: E0130 06:23:55.485944 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="registry-server"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.485966 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="registry-server"
Jan 30 06:23:55 crc kubenswrapper[4931]: E0130 06:23:55.485989 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="extract-content"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.486002 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="extract-content"
Jan 30 06:23:55 crc kubenswrapper[4931]: E0130 06:23:55.486022 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe686ff0-2117-48c0-bde6-41faa75e59b7" containerName="mariadb-account-create-update"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.486036 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe686ff0-2117-48c0-bde6-41faa75e59b7" containerName="mariadb-account-create-update"
Jan 30 06:23:55 crc kubenswrapper[4931]: E0130 06:23:55.486063 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="extract-utilities"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.486075 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="extract-utilities"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.486341 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76cda73-9eb5-4a03-aa82-713af868b080" containerName="registry-server"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.486368 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe686ff0-2117-48c0-bde6-41faa75e59b7" containerName="mariadb-account-create-update"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.489083 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.491965 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.495754 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4x52j"]
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.574274 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnmx\" (UniqueName: \"kubernetes.io/projected/56513a2a-14aa-4055-8b35-de5c272faab9-kube-api-access-bmnmx\") pod \"root-account-create-update-4x52j\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") " pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.574411 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56513a2a-14aa-4055-8b35-de5c272faab9-operator-scripts\") pod \"root-account-create-update-4x52j\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") " pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.677011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnmx\" (UniqueName: \"kubernetes.io/projected/56513a2a-14aa-4055-8b35-de5c272faab9-kube-api-access-bmnmx\") pod \"root-account-create-update-4x52j\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") " pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.677124 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56513a2a-14aa-4055-8b35-de5c272faab9-operator-scripts\") pod \"root-account-create-update-4x52j\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") " pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.678514 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56513a2a-14aa-4055-8b35-de5c272faab9-operator-scripts\") pod \"root-account-create-update-4x52j\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") " pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.711398 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnmx\" (UniqueName: \"kubernetes.io/projected/56513a2a-14aa-4055-8b35-de5c272faab9-kube-api-access-bmnmx\") pod \"root-account-create-update-4x52j\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") " pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:55 crc kubenswrapper[4931]: I0130 06:23:55.819832 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:56 crc kubenswrapper[4931]: I0130 06:23:56.118176 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4x52j"]
Jan 30 06:23:56 crc kubenswrapper[4931]: I0130 06:23:56.353945 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x52j" event={"ID":"56513a2a-14aa-4055-8b35-de5c272faab9","Type":"ContainerStarted","Data":"88feac8d2b5d033c066731d3fa1d66cc34b935b576cc56423d76770840e869ad"}
Jan 30 06:23:56 crc kubenswrapper[4931]: I0130 06:23:56.354017 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x52j" event={"ID":"56513a2a-14aa-4055-8b35-de5c272faab9","Type":"ContainerStarted","Data":"39f25fd44e0bc97c99560df022df6c67ee698d6e72a5c8a01753ba3f5a6baf85"}
Jan 30 06:23:56 crc kubenswrapper[4931]: I0130 06:23:56.390009 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4x52j" podStartSLOduration=1.389973283 podStartE2EDuration="1.389973283s" podCreationTimestamp="2026-01-30 06:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:23:56.381588288 +0000 UTC m=+4571.751498575" watchObservedRunningTime="2026-01-30 06:23:56.389973283 +0000 UTC m=+4571.759883580"
Jan 30 06:23:57 crc kubenswrapper[4931]: I0130 06:23:57.365780 4931 generic.go:334] "Generic (PLEG): container finished" podID="56513a2a-14aa-4055-8b35-de5c272faab9" containerID="88feac8d2b5d033c066731d3fa1d66cc34b935b576cc56423d76770840e869ad" exitCode=0
Jan 30 06:23:57 crc kubenswrapper[4931]: I0130 06:23:57.365837 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x52j" event={"ID":"56513a2a-14aa-4055-8b35-de5c272faab9","Type":"ContainerDied","Data":"88feac8d2b5d033c066731d3fa1d66cc34b935b576cc56423d76770840e869ad"}
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.761450 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.840758 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnmx\" (UniqueName: \"kubernetes.io/projected/56513a2a-14aa-4055-8b35-de5c272faab9-kube-api-access-bmnmx\") pod \"56513a2a-14aa-4055-8b35-de5c272faab9\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") "
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.840850 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56513a2a-14aa-4055-8b35-de5c272faab9-operator-scripts\") pod \"56513a2a-14aa-4055-8b35-de5c272faab9\" (UID: \"56513a2a-14aa-4055-8b35-de5c272faab9\") "
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.841382 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56513a2a-14aa-4055-8b35-de5c272faab9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56513a2a-14aa-4055-8b35-de5c272faab9" (UID: "56513a2a-14aa-4055-8b35-de5c272faab9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.849468 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56513a2a-14aa-4055-8b35-de5c272faab9-kube-api-access-bmnmx" (OuterVolumeSpecName: "kube-api-access-bmnmx") pod "56513a2a-14aa-4055-8b35-de5c272faab9" (UID: "56513a2a-14aa-4055-8b35-de5c272faab9"). InnerVolumeSpecName "kube-api-access-bmnmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.942721 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56513a2a-14aa-4055-8b35-de5c272faab9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:58 crc kubenswrapper[4931]: I0130 06:23:58.942762 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnmx\" (UniqueName: \"kubernetes.io/projected/56513a2a-14aa-4055-8b35-de5c272faab9-kube-api-access-bmnmx\") on node \"crc\" DevicePath \"\""
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.388006 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4x52j"
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.387999 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4x52j" event={"ID":"56513a2a-14aa-4055-8b35-de5c272faab9","Type":"ContainerDied","Data":"39f25fd44e0bc97c99560df022df6c67ee698d6e72a5c8a01753ba3f5a6baf85"}
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.388639 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39f25fd44e0bc97c99560df022df6c67ee698d6e72a5c8a01753ba3f5a6baf85"
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.390581 4931 generic.go:334] "Generic (PLEG): container finished" podID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerID="1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee" exitCode=0
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.390642 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ffda181-212b-42f4-bd56-9ab2864ded3c","Type":"ContainerDied","Data":"1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee"}
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.393974 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerID="ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f" exitCode=0
Jan 30 06:23:59 crc kubenswrapper[4931]: I0130 06:23:59.394020 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3e54229-729f-4bfc-a208-dc39edc35b8a","Type":"ContainerDied","Data":"ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f"}
Jan 30 06:24:00 crc kubenswrapper[4931]: I0130 06:24:00.403588 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ffda181-212b-42f4-bd56-9ab2864ded3c","Type":"ContainerStarted","Data":"a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523"}
Jan 30 06:24:00 crc kubenswrapper[4931]: I0130 06:24:00.404668 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 30 06:24:00 crc kubenswrapper[4931]: I0130 06:24:00.405658 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3e54229-729f-4bfc-a208-dc39edc35b8a","Type":"ContainerStarted","Data":"2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf"}
Jan 30 06:24:00 crc kubenswrapper[4931]: I0130 06:24:00.406341 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 06:24:00 crc kubenswrapper[4931]: I0130 06:24:00.427827 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.427811807 podStartE2EDuration="38.427811807s" podCreationTimestamp="2026-01-30 06:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:24:00.425584585 +0000 UTC m=+4575.795494872" watchObservedRunningTime="2026-01-30 06:24:00.427811807 +0000 UTC m=+4575.797722064"
Jan 30 06:24:00 crc kubenswrapper[4931]: I0130 06:24:00.453903 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.453886376 podStartE2EDuration="38.453886376s" podCreationTimestamp="2026-01-30 06:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:24:00.451071337 +0000 UTC m=+4575.820981684" watchObservedRunningTime="2026-01-30 06:24:00.453886376 +0000 UTC m=+4575.823796633"
Jan 30 06:24:05 crc kubenswrapper[4931]: I0130 06:24:05.426179 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229"
Jan 30 06:24:06 crc kubenswrapper[4931]: I0130 06:24:06.456244 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"2c01137c96244e8746673f3822245d96905f563a3e4aa39b2a8d7db22e60ff5b"}
Jan 30 06:24:14 crc kubenswrapper[4931]: I0130 06:24:14.035773 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 30 06:24:14 crc kubenswrapper[4931]: I0130 06:24:14.109055 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.204659 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8dc7b"]
Jan 30 06:24:20 crc kubenswrapper[4931]: E0130 06:24:20.205601 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56513a2a-14aa-4055-8b35-de5c272faab9" containerName="mariadb-account-create-update"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.205621 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="56513a2a-14aa-4055-8b35-de5c272faab9" containerName="mariadb-account-create-update"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.205811 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="56513a2a-14aa-4055-8b35-de5c272faab9" containerName="mariadb-account-create-update"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.206706 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.224656 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8dc7b"]
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.324704 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-dns-svc\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.324787 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-config\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.324949 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68qr\" (UniqueName: \"kubernetes.io/projected/df16978b-d22c-4dd1-87d8-330cf82a859d-kube-api-access-r68qr\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.426691 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-config\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.426760 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68qr\" (UniqueName: \"kubernetes.io/projected/df16978b-d22c-4dd1-87d8-330cf82a859d-kube-api-access-r68qr\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.426857 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-dns-svc\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.427817 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-config\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.427911 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-dns-svc\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.460129 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68qr\" (UniqueName: \"kubernetes.io/projected/df16978b-d22c-4dd1-87d8-330cf82a859d-kube-api-access-r68qr\") pod \"dnsmasq-dns-699964fbc-8dc7b\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") " pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:20 crc kubenswrapper[4931]: I0130 06:24:20.554502 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:21 crc kubenswrapper[4931]: I0130 06:24:21.045703 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 06:24:21 crc kubenswrapper[4931]: W0130 06:24:21.113621 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf16978b_d22c_4dd1_87d8_330cf82a859d.slice/crio-e39274ddf419bf92df1598af572cac14fef4ddf978c753b8f221ebec54b897c7 WatchSource:0}: Error finding container e39274ddf419bf92df1598af572cac14fef4ddf978c753b8f221ebec54b897c7: Status 404 returned error can't find the container with id e39274ddf419bf92df1598af572cac14fef4ddf978c753b8f221ebec54b897c7
Jan 30 06:24:21 crc kubenswrapper[4931]: I0130 06:24:21.117403 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8dc7b"]
Jan 30 06:24:21 crc kubenswrapper[4931]: I0130 06:24:21.630970 4931 generic.go:334] "Generic (PLEG): container finished" podID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerID="a56ecc0c98ffc762965d58506b3e81c6c6637f6a00e16f27ab2be355f3d037e0" exitCode=0
Jan 30 06:24:21 crc kubenswrapper[4931]: I0130 06:24:21.631016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" event={"ID":"df16978b-d22c-4dd1-87d8-330cf82a859d","Type":"ContainerDied","Data":"a56ecc0c98ffc762965d58506b3e81c6c6637f6a00e16f27ab2be355f3d037e0"}
Jan 30 06:24:21 crc kubenswrapper[4931]: I0130 06:24:21.631312 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" event={"ID":"df16978b-d22c-4dd1-87d8-330cf82a859d","Type":"ContainerStarted","Data":"e39274ddf419bf92df1598af572cac14fef4ddf978c753b8f221ebec54b897c7"}
Jan 30 06:24:21 crc kubenswrapper[4931]: I0130 06:24:21.722157 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 30 06:24:22 crc kubenswrapper[4931]: I0130 06:24:22.640398 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" event={"ID":"df16978b-d22c-4dd1-87d8-330cf82a859d","Type":"ContainerStarted","Data":"c38a0669baf43cbcb3f248e3fbd7e19a5da64c872efcf71b2a52e9bac3e9cedd"}
Jan 30 06:24:22 crc kubenswrapper[4931]: I0130 06:24:22.641067 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:24:22 crc kubenswrapper[4931]: I0130 06:24:22.664442 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" podStartSLOduration=2.664412048 podStartE2EDuration="2.664412048s" podCreationTimestamp="2026-01-30 06:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:24:22.662569067 +0000 UTC m=+4598.032479364" watchObservedRunningTime="2026-01-30 06:24:22.664412048 +0000 UTC m=+4598.034322305"
Jan 30 06:24:22 crc kubenswrapper[4931]: I0130 06:24:22.886046 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="rabbitmq" containerID="cri-o://a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523" gracePeriod=604799
Jan 30 06:24:23 crc kubenswrapper[4931]: I0130 06:24:23.517702 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="rabbitmq" containerID="cri-o://2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf" gracePeriod=604799
Jan 30 06:24:24 crc kubenswrapper[4931]: I0130 06:24:24.031494 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.249:5672: connect: connection refused"
Jan 30 06:24:24 crc kubenswrapper[4931]: I0130 06:24:24.106110 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.248:5672: connect: connection refused"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.499127 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.622654 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-erlang-cookie\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.622699 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-confd\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.622740 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ngdz\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-kube-api-access-2ngdz\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.622784 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ffda181-212b-42f4-bd56-9ab2864ded3c-pod-info\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.622829 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-server-conf\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623042 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623096 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-plugins-conf\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623137 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ffda181-212b-42f4-bd56-9ab2864ded3c-erlang-cookie-secret\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623207 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-plugins\") pod \"2ffda181-212b-42f4-bd56-9ab2864ded3c\" (UID: \"2ffda181-212b-42f4-bd56-9ab2864ded3c\") "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623670 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623920 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.623933 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.624020 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.627954 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2ffda181-212b-42f4-bd56-9ab2864ded3c-pod-info" (OuterVolumeSpecName: "pod-info") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.629272 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ffda181-212b-42f4-bd56-9ab2864ded3c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.635486 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485" (OuterVolumeSpecName: "persistence") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.647962 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-kube-api-access-2ngdz" (OuterVolumeSpecName: "kube-api-access-2ngdz") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "kube-api-access-2ngdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.659077 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-server-conf" (OuterVolumeSpecName: "server-conf") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.719893 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2ffda181-212b-42f4-bd56-9ab2864ded3c" (UID: "2ffda181-212b-42f4-bd56-9ab2864ded3c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.724028 4931 generic.go:334] "Generic (PLEG): container finished" podID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerID="a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523" exitCode=0
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.724074 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ffda181-212b-42f4-bd56-9ab2864ded3c","Type":"ContainerDied","Data":"a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523"}
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.724106 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2ffda181-212b-42f4-bd56-9ab2864ded3c","Type":"ContainerDied","Data":"d1546caee0e50a6ac2e387f8e39e4451c82f903d75d2682ba4430ed473fac38f"}
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.724126 4931 scope.go:117] "RemoveContainer" containerID="a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.724234 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.736915 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.736947 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ngdz\" (UniqueName: \"kubernetes.io/projected/2ffda181-212b-42f4-bd56-9ab2864ded3c-kube-api-access-2ngdz\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.736958 4931 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2ffda181-212b-42f4-bd56-9ab2864ded3c-pod-info\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.736967 4931 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-server-conf\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.737002 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") on node \"crc\" "
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.737014 4931 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2ffda181-212b-42f4-bd56-9ab2864ded3c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.737026 4931 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2ffda181-212b-42f4-bd56-9ab2864ded3c-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.737037 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2ffda181-212b-42f4-bd56-9ab2864ded3c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.753046 4931 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.753175 4931 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485") on node "crc"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.762515 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.766737 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.788864 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 06:24:29 crc kubenswrapper[4931]: E0130 06:24:29.789241 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="setup-container"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.789256 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="setup-container"
Jan 30 06:24:29 crc kubenswrapper[4931]: E0130 06:24:29.789290 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="rabbitmq"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.789298 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="rabbitmq"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.789518 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" containerName="rabbitmq"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.790434 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.794048 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.794088 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.794095 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mgh9s"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.794382 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.794533 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.795888 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.839796 4931 reconciler_common.go:293] "Volume detached for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") on node \"crc\" DevicePath \"\""
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.876489 4931 scope.go:117] "RemoveContainer" containerID="1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.897723 4931 scope.go:117] "RemoveContainer" containerID="a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523"
Jan 30 06:24:29 crc kubenswrapper[4931]: E0130 06:24:29.898320 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523\": container with ID starting with a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523 not found: ID does not exist" containerID="a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.898379 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523"} err="failed to get container status \"a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523\": rpc error: code = NotFound desc = could not find container \"a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523\": container with ID starting with a149fd78b00119365a78cb917558b9ad6509169c4733a14475d7be14b2c5b523 not found: ID does not exist"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.898409 4931 scope.go:117] "RemoveContainer" containerID="1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee"
Jan 30 06:24:29 crc kubenswrapper[4931]: E0130 06:24:29.898860 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee\": container with ID starting with 1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee not found: ID does not exist" containerID="1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.898903 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee"} err="failed to get container status \"1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee\": rpc error: code = NotFound desc = could not find container \"1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee\": container with ID starting with 1ecf05555efd310d268e093a1d1f4f0d2289ee41c1d84fd8cf157dc399ec6bee not found: ID does not exist"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941290 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941365 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941468 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8dabfefe-4927-44d0-b370-f7e28f2a4f57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8dabfefe-4927-44d0-b370-f7e28f2a4f57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941633 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8dabfefe-4927-44d0-b370-f7e28f2a4f57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941680 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941715 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27h4b\" (UniqueName: \"kubernetes.io/projected/8dabfefe-4927-44d0-b370-f7e28f2a4f57-kube-api-access-27h4b\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941763 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:29 crc kubenswrapper[4931]: I0130 06:24:29.941804 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8dabfefe-4927-44d0-b370-f7e28f2a4f57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.043462 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8dabfefe-4927-44d0-b370-f7e28f2a4f57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.043897 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8dabfefe-4927-44d0-b370-f7e28f2a4f57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.043933 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8dabfefe-4927-44d0-b370-f7e28f2a4f57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.043984 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.044051 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27h4b\" (UniqueName: \"kubernetes.io/projected/8dabfefe-4927-44d0-b370-f7e28f2a4f57-kube-api-access-27h4b\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.044101 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.044156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8dabfefe-4927-44d0-b370-f7e28f2a4f57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.044261 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0"
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.044304 4931 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.045248 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.045256 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.047580 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8dabfefe-4927-44d0-b370-f7e28f2a4f57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.048468 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8dabfefe-4927-44d0-b370-f7e28f2a4f57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.049585 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.049679 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/96f6795482cf208a0436fcedd4e13f5ef58c9a3e2d9d6166beea188ab34f9e81/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.052607 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8dabfefe-4927-44d0-b370-f7e28f2a4f57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.057259 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8dabfefe-4927-44d0-b370-f7e28f2a4f57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.057772 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8dabfefe-4927-44d0-b370-f7e28f2a4f57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.073511 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27h4b\" (UniqueName: \"kubernetes.io/projected/8dabfefe-4927-44d0-b370-f7e28f2a4f57-kube-api-access-27h4b\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.089811 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57c51230-7d07-4cc2-b5d8-2ac82e509485\") pod \"rabbitmq-server-0\" (UID: \"8dabfefe-4927-44d0-b370-f7e28f2a4f57\") " pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.151175 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.153413 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.247862 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-plugins\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.247957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3e54229-729f-4bfc-a208-dc39edc35b8a-erlang-cookie-secret\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.247994 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-confd\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248031 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3e54229-729f-4bfc-a208-dc39edc35b8a-pod-info\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248075 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-server-conf\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248102 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-plugins-conf\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248139 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-erlang-cookie\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248324 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248343 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bknnm\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-kube-api-access-bknnm\") pod \"c3e54229-729f-4bfc-a208-dc39edc35b8a\" (UID: \"c3e54229-729f-4bfc-a208-dc39edc35b8a\") " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248537 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod 
"c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.248716 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.249448 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.249877 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.254175 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c3e54229-729f-4bfc-a208-dc39edc35b8a-pod-info" (OuterVolumeSpecName: "pod-info") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.257833 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e54229-729f-4bfc-a208-dc39edc35b8a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.269651 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-kube-api-access-bknnm" (OuterVolumeSpecName: "kube-api-access-bknnm") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "kube-api-access-bknnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.273364 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-server-conf" (OuterVolumeSpecName: "server-conf") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.278086 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d" (OuterVolumeSpecName: "persistence") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). 
InnerVolumeSpecName "pvc-73b1cc26-6baa-43d9-842a-e2612558a78d". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.347725 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c3e54229-729f-4bfc-a208-dc39edc35b8a" (UID: "c3e54229-729f-4bfc-a208-dc39edc35b8a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350624 4931 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3e54229-729f-4bfc-a208-dc39edc35b8a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350676 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350692 4931 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3e54229-729f-4bfc-a208-dc39edc35b8a-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350703 4931 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350715 4931 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3e54229-729f-4bfc-a208-dc39edc35b8a-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350728 4931 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3e54229-729f-4bfc-a208-dc39edc35b8a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350787 4931 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") on node \"crc\" " Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.350803 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bknnm\" (UniqueName: \"kubernetes.io/projected/c3e54229-729f-4bfc-a208-dc39edc35b8a-kube-api-access-bknnm\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.375608 4931 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.376090 4931 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-73b1cc26-6baa-43d9-842a-e2612558a78d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d") on node "crc" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.452638 4931 reconciler_common.go:293] "Volume detached for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.556729 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.626087 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-7dq6z"] Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.626464 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerName="dnsmasq-dns" containerID="cri-o://8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46" gracePeriod=10 Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.692249 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 06:24:30 crc kubenswrapper[4931]: W0130 06:24:30.727930 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dabfefe_4927_44d0_b370_f7e28f2a4f57.slice/crio-26192b5a98ae39a9cf72b0bd822a0e13879c7330012c002611b214907a438036 WatchSource:0}: Error finding container 26192b5a98ae39a9cf72b0bd822a0e13879c7330012c002611b214907a438036: Status 404 returned error can't find the container with id 26192b5a98ae39a9cf72b0bd822a0e13879c7330012c002611b214907a438036 Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.741969 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerID="2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf" exitCode=0 Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.742008 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3e54229-729f-4bfc-a208-dc39edc35b8a","Type":"ContainerDied","Data":"2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf"} Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.742030 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c3e54229-729f-4bfc-a208-dc39edc35b8a","Type":"ContainerDied","Data":"8795a34898e6841b11366ed2b6c1d443c46d48e62ae64548732f8a0d5ccd026d"} Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.742046 4931 scope.go:117] "RemoveContainer" containerID="2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.742186 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.799092 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.805330 4931 scope.go:117] "RemoveContainer" containerID="ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.819472 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.827294 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:24:30 crc kubenswrapper[4931]: E0130 06:24:30.827710 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="rabbitmq" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.827724 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="rabbitmq" Jan 30 06:24:30 crc kubenswrapper[4931]: E0130 06:24:30.827744 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="setup-container" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.827751 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="setup-container" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.827891 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" containerName="rabbitmq" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.831047 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.834080 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.834247 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.834413 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.834647 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wgz6z" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.834856 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.837237 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.844707 4931 scope.go:117] "RemoveContainer" containerID="2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf" Jan 30 06:24:30 crc kubenswrapper[4931]: E0130 06:24:30.845237 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf\": container with ID starting with 2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf not found: ID does not exist" containerID="2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.845285 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf"} err="failed to get container status \"2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf\": rpc error: code = NotFound desc = could not find container \"2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf\": container with ID starting with 2265dcbb515242dbc1813f20af18edf4d5c884fae8b8b34b8e14ce4d80da9cbf not found: ID does not exist" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.845310 4931 scope.go:117] "RemoveContainer" containerID="ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f" Jan 30 06:24:30 crc kubenswrapper[4931]: E0130 06:24:30.845572 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f\": container with ID starting with ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f not found: ID does not exist" containerID="ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.845594 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f"} err="failed to get container status \"ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f\": rpc error: code = NotFound desc = could not find container \"ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f\": container with ID starting with ec04384c6d230b5ff8300554f9d2de9fa198aca174b998590f18c0a77858c43f not found: 
ID does not exist" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966491 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966552 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966573 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966607 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966634 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966669 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthw6\" (UniqueName: \"kubernetes.io/projected/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-kube-api-access-gthw6\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966707 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966723 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:30 crc kubenswrapper[4931]: I0130 06:24:30.966739 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067484 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067780 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067802 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067830 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067857 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067875 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthw6\" (UniqueName: \"kubernetes.io/projected/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-kube-api-access-gthw6\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067898 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067916 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.067930 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.068483 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.069643 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.071735 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.072886 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.074029 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.074799 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.076763 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.079052 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.079111 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/081bf8f84c60dfb918ea0eb5418be09e59105cf3295dde894d1b133731bc6391/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.089814 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthw6\" (UniqueName: \"kubernetes.io/projected/ea4042a1-4ebc-4b11-a7e4-e695a668aa81-kube-api-access-gthw6\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.108610 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.113898 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73b1cc26-6baa-43d9-842a-e2612558a78d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ea4042a1-4ebc-4b11-a7e4-e695a668aa81\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.223361 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.271024 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2x7\" (UniqueName: \"kubernetes.io/projected/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-kube-api-access-5l2x7\") pod \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.271227 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-config\") pod \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.271283 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-dns-svc\") pod \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\" (UID: \"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c\") " Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.276786 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-kube-api-access-5l2x7" (OuterVolumeSpecName: "kube-api-access-5l2x7") pod "a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" (UID: "a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c"). InnerVolumeSpecName "kube-api-access-5l2x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.321115 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" (UID: "a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.324231 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-config" (OuterVolumeSpecName: "config") pod "a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" (UID: "a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.376849 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2x7\" (UniqueName: \"kubernetes.io/projected/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-kube-api-access-5l2x7\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.377113 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.377123 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.431628 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffda181-212b-42f4-bd56-9ab2864ded3c" path="/var/lib/kubelet/pods/2ffda181-212b-42f4-bd56-9ab2864ded3c/volumes" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.432793 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e54229-729f-4bfc-a208-dc39edc35b8a" path="/var/lib/kubelet/pods/c3e54229-729f-4bfc-a208-dc39edc35b8a/volumes" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.697067 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 06:24:31 crc kubenswrapper[4931]: W0130 06:24:31.709357 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea4042a1_4ebc_4b11_a7e4_e695a668aa81.slice/crio-258fba18620fdf53d39a231cccf314fbf000ec50000e662c1ea9ce4f5aa04430 WatchSource:0}: Error finding container 258fba18620fdf53d39a231cccf314fbf000ec50000e662c1ea9ce4f5aa04430: Status 404 returned error can't find the container with id 258fba18620fdf53d39a231cccf314fbf000ec50000e662c1ea9ce4f5aa04430 Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.755386 4931 generic.go:334] "Generic (PLEG): container finished" podID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerID="8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46" exitCode=0 Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.755564 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" event={"ID":"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c","Type":"ContainerDied","Data":"8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46"} Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.755655 4931 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" event={"ID":"a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c","Type":"ContainerDied","Data":"df23f9bc8284d347a4dda805a55b9a64a2739f707161ba449d8a6399ee4d3665"} Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.755650 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-7dq6z" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.755689 4931 scope.go:117] "RemoveContainer" containerID="8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.760780 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ea4042a1-4ebc-4b11-a7e4-e695a668aa81","Type":"ContainerStarted","Data":"258fba18620fdf53d39a231cccf314fbf000ec50000e662c1ea9ce4f5aa04430"} Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.767198 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8dabfefe-4927-44d0-b370-f7e28f2a4f57","Type":"ContainerStarted","Data":"26192b5a98ae39a9cf72b0bd822a0e13879c7330012c002611b214907a438036"} Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.888275 4931 scope.go:117] "RemoveContainer" containerID="85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7" Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.923161 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-7dq6z"] Jan 30 06:24:31 crc kubenswrapper[4931]: I0130 06:24:31.930564 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-7dq6z"] Jan 30 06:24:32 crc kubenswrapper[4931]: I0130 06:24:32.021334 4931 scope.go:117] "RemoveContainer" containerID="8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46" Jan 30 06:24:32 crc kubenswrapper[4931]: E0130 06:24:32.021965 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46\": container with ID starting with 8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46 not found: ID does not exist" containerID="8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46" Jan 30 06:24:32 crc kubenswrapper[4931]: I0130 06:24:32.022022 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46"} err="failed to get container status \"8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46\": rpc error: code = NotFound desc = could not find container \"8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46\": container with ID starting with 8077550b8b238461a8681b1d15ec72e4c6bfd9ffea2a3677dfce9da31986db46 not found: ID does not exist" Jan 30 06:24:32 crc kubenswrapper[4931]: I0130 06:24:32.022060 4931 scope.go:117] "RemoveContainer" containerID="85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7" Jan 30 06:24:32 crc kubenswrapper[4931]: E0130 06:24:32.022679 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7\": container with ID starting with 85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7 not found: ID does not exist" 
containerID="85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7" Jan 30 06:24:32 crc kubenswrapper[4931]: I0130 06:24:32.022722 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7"} err="failed to get container status \"85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7\": rpc error: code = NotFound desc = could not find container \"85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7\": container with ID starting with 85ba0da9a4253ef8d549cbbe8037a7a7fd65be4b07650d2f13e5de75d2006dc7 not found: ID does not exist" Jan 30 06:24:32 crc kubenswrapper[4931]: I0130 06:24:32.779006 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8dabfefe-4927-44d0-b370-f7e28f2a4f57","Type":"ContainerStarted","Data":"4a7054ec29eebc4e0ea1decb7fd718f56644883114ed130ad148289acb6131f6"} Jan 30 06:24:33 crc kubenswrapper[4931]: I0130 06:24:33.437057 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" path="/var/lib/kubelet/pods/a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c/volumes" Jan 30 06:24:33 crc kubenswrapper[4931]: I0130 06:24:33.795316 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ea4042a1-4ebc-4b11-a7e4-e695a668aa81","Type":"ContainerStarted","Data":"e9b635eb7fff471125ec7e17c4a2d16fe738c9f4ff34ea8d446cfc2643643db8"} Jan 30 06:25:06 crc kubenswrapper[4931]: I0130 06:25:06.127254 4931 generic.go:334] "Generic (PLEG): container finished" podID="8dabfefe-4927-44d0-b370-f7e28f2a4f57" containerID="4a7054ec29eebc4e0ea1decb7fd718f56644883114ed130ad148289acb6131f6" exitCode=0 Jan 30 06:25:06 crc kubenswrapper[4931]: I0130 06:25:06.127324 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8dabfefe-4927-44d0-b370-f7e28f2a4f57","Type":"ContainerDied","Data":"4a7054ec29eebc4e0ea1decb7fd718f56644883114ed130ad148289acb6131f6"} Jan 30 06:25:07 crc kubenswrapper[4931]: I0130 06:25:07.177289 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8dabfefe-4927-44d0-b370-f7e28f2a4f57","Type":"ContainerStarted","Data":"b4e923c38a6c09f553fc519448bfaa59f47f84f3474d9f91b2de09906bc96c20"} Jan 30 06:25:07 crc kubenswrapper[4931]: I0130 06:25:07.177801 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 06:25:07 crc kubenswrapper[4931]: I0130 06:25:07.208305 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.208290095 podStartE2EDuration="38.208290095s" podCreationTimestamp="2026-01-30 06:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:25:07.207117892 +0000 UTC m=+4642.577028189" watchObservedRunningTime="2026-01-30 06:25:07.208290095 +0000 UTC m=+4642.578200352" Jan 30 06:25:07 crc kubenswrapper[4931]: I0130 06:25:07.210762 4931 generic.go:334] "Generic (PLEG): container finished" podID="ea4042a1-4ebc-4b11-a7e4-e695a668aa81" containerID="e9b635eb7fff471125ec7e17c4a2d16fe738c9f4ff34ea8d446cfc2643643db8" exitCode=0 Jan 30 06:25:07 crc kubenswrapper[4931]: I0130 06:25:07.210827 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ea4042a1-4ebc-4b11-a7e4-e695a668aa81","Type":"ContainerDied","Data":"e9b635eb7fff471125ec7e17c4a2d16fe738c9f4ff34ea8d446cfc2643643db8"} Jan 30 06:25:08 crc kubenswrapper[4931]: I0130 06:25:08.219845 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ea4042a1-4ebc-4b11-a7e4-e695a668aa81","Type":"ContainerStarted","Data":"7b9bea16453033d6720d410c61025a96cabc35996588e21e535a5bf5d370b443"} Jan 30 06:25:08 crc kubenswrapper[4931]: I0130 06:25:08.220286 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:25:08 crc kubenswrapper[4931]: I0130 06:25:08.249137 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.249112441 podStartE2EDuration="38.249112441s" podCreationTimestamp="2026-01-30 06:24:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:25:08.246373524 +0000 UTC m=+4643.616283791" watchObservedRunningTime="2026-01-30 06:25:08.249112441 +0000 UTC m=+4643.619022718" Jan 30 06:25:20 crc kubenswrapper[4931]: I0130 06:25:20.155787 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 06:25:21 crc kubenswrapper[4931]: I0130 06:25:21.232740 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 06:25:27 crc kubenswrapper[4931]: I0130 06:25:27.990380 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 06:25:27 crc kubenswrapper[4931]: E0130 06:25:27.991627 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerName="dnsmasq-dns" Jan 30 06:25:27 crc kubenswrapper[4931]: I0130 06:25:27.991646 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerName="dnsmasq-dns" Jan 30 06:25:27 crc kubenswrapper[4931]: E0130 06:25:27.991660 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerName="init" Jan 30 06:25:27 crc kubenswrapper[4931]: I0130 06:25:27.991668 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerName="init" Jan 30 06:25:27 crc kubenswrapper[4931]: I0130 06:25:27.991846 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43ae7a0-f3ca-4eb9-ae23-e6dc8f4e5a1c" containerName="dnsmasq-dns" Jan 30 06:25:27 crc kubenswrapper[4931]: I0130 06:25:27.992913 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:25:27 crc kubenswrapper[4931]: I0130 06:25:27.996165 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qqbkw" Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.000570 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.121482 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dx8c\" (UniqueName: \"kubernetes.io/projected/01ffdc71-90ec-42da-ae77-d65caba67d94-kube-api-access-4dx8c\") pod \"mariadb-client\" (UID: \"01ffdc71-90ec-42da-ae77-d65caba67d94\") " pod="openstack/mariadb-client" Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.223535 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dx8c\" (UniqueName: \"kubernetes.io/projected/01ffdc71-90ec-42da-ae77-d65caba67d94-kube-api-access-4dx8c\") pod \"mariadb-client\" (UID: \"01ffdc71-90ec-42da-ae77-d65caba67d94\") " pod="openstack/mariadb-client" Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.262716 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dx8c\" (UniqueName: \"kubernetes.io/projected/01ffdc71-90ec-42da-ae77-d65caba67d94-kube-api-access-4dx8c\") pod \"mariadb-client\" (UID: \"01ffdc71-90ec-42da-ae77-d65caba67d94\") " pod="openstack/mariadb-client" Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.327003 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.877531 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:25:28 crc kubenswrapper[4931]: W0130 06:25:28.881063 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01ffdc71_90ec_42da_ae77_d65caba67d94.slice/crio-47b19816a25b7b5565b5d3a98e3c040159fe14ce20a317a79602e112c5e860fb WatchSource:0}: Error finding container 47b19816a25b7b5565b5d3a98e3c040159fe14ce20a317a79602e112c5e860fb: Status 404 returned error can't find the container with id 47b19816a25b7b5565b5d3a98e3c040159fe14ce20a317a79602e112c5e860fb Jan 30 06:25:28 crc kubenswrapper[4931]: I0130 06:25:28.884332 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:25:29 crc kubenswrapper[4931]: I0130 06:25:29.435304 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"01ffdc71-90ec-42da-ae77-d65caba67d94","Type":"ContainerStarted","Data":"47b19816a25b7b5565b5d3a98e3c040159fe14ce20a317a79602e112c5e860fb"} Jan 30 06:25:30 crc kubenswrapper[4931]: I0130 06:25:30.449908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"01ffdc71-90ec-42da-ae77-d65caba67d94","Type":"ContainerStarted","Data":"076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175"} Jan 30 06:25:30 crc kubenswrapper[4931]: I0130 06:25:30.471599 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.574171144 podStartE2EDuration="3.471572609s" podCreationTimestamp="2026-01-30 06:25:27 +0000 UTC" firstStartedPulling="2026-01-30 06:25:28.883948373 +0000 UTC m=+4664.253858670" lastFinishedPulling="2026-01-30 06:25:29.781349838 
+0000 UTC m=+4665.151260135" observedRunningTime="2026-01-30 06:25:30.464545662 +0000 UTC m=+4665.834455959" watchObservedRunningTime="2026-01-30 06:25:30.471572609 +0000 UTC m=+4665.841482896" Jan 30 06:25:32 crc kubenswrapper[4931]: E0130 06:25:32.551459 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:49758->38.102.83.179:45103: write tcp 38.102.83.179:49758->38.102.83.179:45103: write: broken pipe Jan 30 06:25:43 crc kubenswrapper[4931]: I0130 06:25:43.912196 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:25:43 crc kubenswrapper[4931]: I0130 06:25:43.912751 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="01ffdc71-90ec-42da-ae77-d65caba67d94" containerName="mariadb-client" containerID="cri-o://076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175" gracePeriod=30 Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.476164 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.586967 4931 generic.go:334] "Generic (PLEG): container finished" podID="01ffdc71-90ec-42da-ae77-d65caba67d94" containerID="076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175" exitCode=143 Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.587033 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.587025 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"01ffdc71-90ec-42da-ae77-d65caba67d94","Type":"ContainerDied","Data":"076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175"} Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.587201 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"01ffdc71-90ec-42da-ae77-d65caba67d94","Type":"ContainerDied","Data":"47b19816a25b7b5565b5d3a98e3c040159fe14ce20a317a79602e112c5e860fb"} Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.587232 4931 scope.go:117] "RemoveContainer" containerID="076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.601377 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dx8c\" (UniqueName: \"kubernetes.io/projected/01ffdc71-90ec-42da-ae77-d65caba67d94-kube-api-access-4dx8c\") pod \"01ffdc71-90ec-42da-ae77-d65caba67d94\" (UID: \"01ffdc71-90ec-42da-ae77-d65caba67d94\") " Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.608896 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ffdc71-90ec-42da-ae77-d65caba67d94-kube-api-access-4dx8c" (OuterVolumeSpecName: "kube-api-access-4dx8c") pod "01ffdc71-90ec-42da-ae77-d65caba67d94" (UID: "01ffdc71-90ec-42da-ae77-d65caba67d94"). InnerVolumeSpecName "kube-api-access-4dx8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.615281 4931 scope.go:117] "RemoveContainer" containerID="076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175" Jan 30 06:25:44 crc kubenswrapper[4931]: E0130 06:25:44.615772 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175\": container with ID starting with 076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175 not found: ID does not exist" containerID="076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.615814 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175"} err="failed to get container status \"076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175\": rpc error: code = NotFound desc = could not find container \"076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175\": container with ID starting with 076ac035df6a30e346197c66aae9d9956205573072018033a2c8c55784c52175 not found: ID does not exist" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.703198 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dx8c\" (UniqueName: \"kubernetes.io/projected/01ffdc71-90ec-42da-ae77-d65caba67d94-kube-api-access-4dx8c\") on node \"crc\" DevicePath \"\"" Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.924820 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:25:44 crc kubenswrapper[4931]: I0130 06:25:44.934443 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:25:45 crc kubenswrapper[4931]: I0130 06:25:45.439505 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ffdc71-90ec-42da-ae77-d65caba67d94" path="/var/lib/kubelet/pods/01ffdc71-90ec-42da-ae77-d65caba67d94/volumes" Jan 30 06:25:56 crc kubenswrapper[4931]: I0130 06:25:56.917491 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gp9rt"] Jan 30 06:25:56 crc kubenswrapper[4931]: E0130 06:25:56.918154 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ffdc71-90ec-42da-ae77-d65caba67d94" containerName="mariadb-client" Jan 30 06:25:56 crc kubenswrapper[4931]: I0130 06:25:56.918166 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ffdc71-90ec-42da-ae77-d65caba67d94" containerName="mariadb-client" Jan 30 06:25:56 crc kubenswrapper[4931]: I0130 06:25:56.918308 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ffdc71-90ec-42da-ae77-d65caba67d94" containerName="mariadb-client" Jan 30 06:25:56 crc kubenswrapper[4931]: I0130 06:25:56.919254 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:56 crc kubenswrapper[4931]: I0130 06:25:56.941262 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp9rt"] Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.104626 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-catalog-content\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.104712 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvw9m\" (UniqueName: \"kubernetes.io/projected/a8685ea7-9223-467f-aa16-c300e37458a6-kube-api-access-xvw9m\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.104914 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-utilities\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.206534 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-catalog-content\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.206638 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvw9m\" (UniqueName: \"kubernetes.io/projected/a8685ea7-9223-467f-aa16-c300e37458a6-kube-api-access-xvw9m\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.206729 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-utilities\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.207098 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-catalog-content\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.207213 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-utilities\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.248851 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xvw9m\" (UniqueName: \"kubernetes.io/projected/a8685ea7-9223-467f-aa16-c300e37458a6-kube-api-access-xvw9m\") pod \"certified-operators-gp9rt\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.543407 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:25:57 crc kubenswrapper[4931]: I0130 06:25:57.807590 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp9rt"] Jan 30 06:25:58 crc kubenswrapper[4931]: I0130 06:25:58.727887 4931 generic.go:334] "Generic (PLEG): container finished" podID="a8685ea7-9223-467f-aa16-c300e37458a6" containerID="b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9" exitCode=0 Jan 30 06:25:58 crc kubenswrapper[4931]: I0130 06:25:58.727955 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerDied","Data":"b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9"} Jan 30 06:25:58 crc kubenswrapper[4931]: I0130 06:25:58.727999 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerStarted","Data":"146130ba3b835b68dc312b2d6298eca310c31f68dc9e2518f2d63a78ca8e23b3"} Jan 30 06:25:59 crc kubenswrapper[4931]: I0130 06:25:59.740217 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerStarted","Data":"8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46"} Jan 30 06:26:00 crc kubenswrapper[4931]: I0130 06:26:00.756379 4931 generic.go:334] "Generic (PLEG): container finished" podID="a8685ea7-9223-467f-aa16-c300e37458a6" containerID="8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46" exitCode=0 Jan 30 06:26:00 crc kubenswrapper[4931]: I0130 06:26:00.756478 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerDied","Data":"8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46"} Jan 30 06:26:02 crc kubenswrapper[4931]: I0130 06:26:02.786033 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerStarted","Data":"e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071"} Jan 30 06:26:02 crc kubenswrapper[4931]: I0130 06:26:02.831141 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gp9rt" podStartSLOduration=4.426415297 podStartE2EDuration="6.831111531s" podCreationTimestamp="2026-01-30 06:25:56 +0000 UTC" firstStartedPulling="2026-01-30 06:25:58.733039564 +0000 UTC m=+4694.102949861" lastFinishedPulling="2026-01-30 06:26:01.137735798 +0000 UTC m=+4696.507646095" observedRunningTime="2026-01-30 06:26:02.823607621 +0000 UTC m=+4698.193517968" watchObservedRunningTime="2026-01-30 06:26:02.831111531 +0000 UTC m=+4698.201021828" Jan 30 06:26:07 crc kubenswrapper[4931]: I0130 06:26:07.543697 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:26:07 crc kubenswrapper[4931]: I0130 06:26:07.544532 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:26:07 crc kubenswrapper[4931]: I0130 06:26:07.617211 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:26:07 crc kubenswrapper[4931]: I0130 06:26:07.927309 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:26:07 crc kubenswrapper[4931]: I0130 06:26:07.994133 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gp9rt"] Jan 30 06:26:09 crc kubenswrapper[4931]: I0130 06:26:09.851553 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gp9rt" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="registry-server" containerID="cri-o://e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071" gracePeriod=2 Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.387530 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.543464 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-catalog-content\") pod \"a8685ea7-9223-467f-aa16-c300e37458a6\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.543521 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-utilities\") pod \"a8685ea7-9223-467f-aa16-c300e37458a6\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.543570 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvw9m\" (UniqueName: \"kubernetes.io/projected/a8685ea7-9223-467f-aa16-c300e37458a6-kube-api-access-xvw9m\") pod \"a8685ea7-9223-467f-aa16-c300e37458a6\" (UID: \"a8685ea7-9223-467f-aa16-c300e37458a6\") " Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.546207 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-utilities" (OuterVolumeSpecName: "utilities") pod "a8685ea7-9223-467f-aa16-c300e37458a6" (UID: "a8685ea7-9223-467f-aa16-c300e37458a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.550042 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8685ea7-9223-467f-aa16-c300e37458a6-kube-api-access-xvw9m" (OuterVolumeSpecName: "kube-api-access-xvw9m") pod "a8685ea7-9223-467f-aa16-c300e37458a6" (UID: "a8685ea7-9223-467f-aa16-c300e37458a6"). InnerVolumeSpecName "kube-api-access-xvw9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.645363 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvw9m\" (UniqueName: \"kubernetes.io/projected/a8685ea7-9223-467f-aa16-c300e37458a6-kube-api-access-xvw9m\") on node \"crc\" DevicePath \"\"" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.645395 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.871537 4931 generic.go:334] "Generic (PLEG): container finished" podID="a8685ea7-9223-467f-aa16-c300e37458a6" containerID="e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071" exitCode=0 Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.871625 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerDied","Data":"e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071"} Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.871639 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp9rt" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.871692 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9rt" event={"ID":"a8685ea7-9223-467f-aa16-c300e37458a6","Type":"ContainerDied","Data":"146130ba3b835b68dc312b2d6298eca310c31f68dc9e2518f2d63a78ca8e23b3"} Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.871731 4931 scope.go:117] "RemoveContainer" containerID="e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.913768 4931 scope.go:117] "RemoveContainer" containerID="8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46" Jan 30 06:26:10 crc kubenswrapper[4931]: I0130 06:26:10.959992 4931 scope.go:117] "RemoveContainer" containerID="b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.031224 4931 scope.go:117] "RemoveContainer" containerID="e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071" Jan 30 06:26:11 crc kubenswrapper[4931]: E0130 06:26:11.032278 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071\": container with ID starting with e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071 not found: ID does not exist" containerID="e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.032343 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071"} err="failed to get container status \"e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071\": rpc error: code = NotFound desc = could not find container \"e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071\": container with ID starting with e0ec2dd37b22b9e1e57abf9e5cace96a2831aa2924c90e60c36810dfd0a9f071 not found: ID does not exist" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.032383 4931 scope.go:117] 
"RemoveContainer" containerID="8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46" Jan 30 06:26:11 crc kubenswrapper[4931]: E0130 06:26:11.032912 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46\": container with ID starting with 8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46 not found: ID does not exist" containerID="8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.032958 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46"} err="failed to get container status \"8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46\": rpc error: code = NotFound desc = could not find container \"8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46\": container with ID starting with 8151a207868259506fa8d97beb80eae49c90d31fc1799495821d0c42c53b9c46 not found: ID does not exist" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.032985 4931 scope.go:117] "RemoveContainer" containerID="b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9" Jan 30 06:26:11 crc kubenswrapper[4931]: E0130 06:26:11.033396 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9\": container with ID starting with b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9 not found: ID does not exist" containerID="b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.033459 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9"} err="failed to get container status \"b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9\": rpc error: code = NotFound desc = could not find container \"b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9\": container with ID starting with b4c888e2b470037ed9cd1997c3b6ae2d408007800ca0928273fd3046d6d08cb9 not found: ID does not exist" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.078332 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8685ea7-9223-467f-aa16-c300e37458a6" (UID: "a8685ea7-9223-467f-aa16-c300e37458a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.154176 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8685ea7-9223-467f-aa16-c300e37458a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.221133 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gp9rt"] Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.228356 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gp9rt"] Jan 30 06:26:11 crc kubenswrapper[4931]: I0130 06:26:11.440055 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" path="/var/lib/kubelet/pods/a8685ea7-9223-467f-aa16-c300e37458a6/volumes" Jan 30 06:26:27 crc kubenswrapper[4931]: I0130 06:26:27.363641 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:26:27 crc kubenswrapper[4931]: I0130 06:26:27.364326 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:26:53 crc kubenswrapper[4931]: I0130 06:26:53.678197 4931 scope.go:117] "RemoveContainer" containerID="7d286e4ff9e3a5d29e83c4a7e4320e5360dd3ec6c72cd95a6b0fdf400bac7103" Jan 30 06:26:57 crc kubenswrapper[4931]: I0130 06:26:57.363632 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:26:57 crc kubenswrapper[4931]: I0130 06:26:57.364111 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.363944 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.365560 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.365623 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 
30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.366253 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c01137c96244e8746673f3822245d96905f563a3e4aa39b2a8d7db22e60ff5b"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.366308 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://2c01137c96244e8746673f3822245d96905f563a3e4aa39b2a8d7db22e60ff5b" gracePeriod=600 Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.633067 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="2c01137c96244e8746673f3822245d96905f563a3e4aa39b2a8d7db22e60ff5b" exitCode=0 Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.633179 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"2c01137c96244e8746673f3822245d96905f563a3e4aa39b2a8d7db22e60ff5b"} Jan 30 06:27:27 crc kubenswrapper[4931]: I0130 06:27:27.633578 4931 scope.go:117] "RemoveContainer" containerID="77d9af72e686de38fde2b8554c979e8b7ae6fb2e13737ff4434c70e44692f229" Jan 30 06:27:28 crc kubenswrapper[4931]: I0130 06:27:28.647249 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b"} Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.031901 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vq4hz"] Jan 30 06:27:55 crc kubenswrapper[4931]: E0130 06:27:55.032939 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="extract-utilities" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.032961 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="extract-utilities" Jan 30 06:27:55 crc kubenswrapper[4931]: E0130 06:27:55.032980 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="extract-content" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.032992 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="extract-content" Jan 30 06:27:55 crc kubenswrapper[4931]: E0130 06:27:55.033035 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="registry-server" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.033048 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="registry-server" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.033304 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8685ea7-9223-467f-aa16-c300e37458a6" containerName="registry-server" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 
06:27:55.035181 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.079770 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vq4hz"] Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.109415 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-catalog-content\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.109906 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-utilities\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.110009 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvc8g\" (UniqueName: \"kubernetes.io/projected/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-kube-api-access-pvc8g\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.211629 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-utilities\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.211679 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvc8g\" (UniqueName: \"kubernetes.io/projected/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-kube-api-access-pvc8g\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.211799 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-catalog-content\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.212326 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-catalog-content\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.212634 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-utilities\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc 
kubenswrapper[4931]: I0130 06:27:55.304876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvc8g\" (UniqueName: \"kubernetes.io/projected/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-kube-api-access-pvc8g\") pod \"community-operators-vq4hz\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.385480 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:27:55 crc kubenswrapper[4931]: I0130 06:27:55.932294 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vq4hz"] Jan 30 06:27:56 crc kubenswrapper[4931]: I0130 06:27:56.927177 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerID="7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6" exitCode=0 Jan 30 06:27:56 crc kubenswrapper[4931]: I0130 06:27:56.927299 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq4hz" event={"ID":"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b","Type":"ContainerDied","Data":"7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6"} Jan 30 06:27:56 crc kubenswrapper[4931]: I0130 06:27:56.927593 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq4hz" event={"ID":"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b","Type":"ContainerStarted","Data":"10231b9edebacb1de68f829de0d15b844091430d6bd68dead34457355f358e40"} Jan 30 06:27:58 crc kubenswrapper[4931]: I0130 06:27:58.947101 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerID="c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f" exitCode=0 Jan 30 06:27:58 crc kubenswrapper[4931]: I0130 06:27:58.947212 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq4hz" event={"ID":"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b","Type":"ContainerDied","Data":"c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f"} Jan 30 06:27:59 crc kubenswrapper[4931]: I0130 06:27:59.970830 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq4hz" event={"ID":"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b","Type":"ContainerStarted","Data":"3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf"} Jan 30 06:28:01 crc kubenswrapper[4931]: I0130 06:28:01.001172 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vq4hz" podStartSLOduration=4.21075302 podStartE2EDuration="7.001154481s" podCreationTimestamp="2026-01-30 06:27:54 +0000 UTC" firstStartedPulling="2026-01-30 06:27:56.932402453 +0000 UTC m=+4812.302312740" lastFinishedPulling="2026-01-30 06:27:59.722803914 +0000 UTC m=+4815.092714201" observedRunningTime="2026-01-30 06:28:00.994014852 +0000 UTC m=+4816.363925129" watchObservedRunningTime="2026-01-30 06:28:01.001154481 +0000 UTC m=+4816.371064758" Jan 30 06:28:05 crc kubenswrapper[4931]: I0130 06:28:05.386214 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:28:05 crc kubenswrapper[4931]: I0130 06:28:05.387130 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:28:05 crc kubenswrapper[4931]: I0130 06:28:05.447375 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:28:06 crc kubenswrapper[4931]: I0130 06:28:06.092530 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:28:06 crc kubenswrapper[4931]: I0130 06:28:06.165634 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vq4hz"] Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.039008 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vq4hz" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="registry-server" containerID="cri-o://3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf" gracePeriod=2 Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.785623 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.804872 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-catalog-content\") pod \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.804961 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvc8g\" (UniqueName: \"kubernetes.io/projected/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-kube-api-access-pvc8g\") pod \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.805112 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-utilities\") pod \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\" (UID: \"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b\") " Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.806035 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-utilities" (OuterVolumeSpecName: "utilities") pod "c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" (UID: "c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.814233 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-kube-api-access-pvc8g" (OuterVolumeSpecName: "kube-api-access-pvc8g") pod "c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" (UID: "c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b"). InnerVolumeSpecName "kube-api-access-pvc8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.907899 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvc8g\" (UniqueName: \"kubernetes.io/projected/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-kube-api-access-pvc8g\") on node \"crc\" DevicePath \"\"" Jan 30 06:28:08 crc kubenswrapper[4931]: I0130 06:28:08.907953 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.052630 4931 generic.go:334] "Generic (PLEG): container finished" podID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerID="3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf" exitCode=0 Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.052708 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq4hz" event={"ID":"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b","Type":"ContainerDied","Data":"3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf"} Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.052748 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vq4hz" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.052789 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vq4hz" event={"ID":"c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b","Type":"ContainerDied","Data":"10231b9edebacb1de68f829de0d15b844091430d6bd68dead34457355f358e40"} Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.052822 4931 scope.go:117] "RemoveContainer" containerID="3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.088201 4931 scope.go:117] "RemoveContainer" containerID="c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.120777 4931 scope.go:117] "RemoveContainer" containerID="7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.164452 4931 scope.go:117] "RemoveContainer" containerID="3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf" Jan 30 06:28:09 crc kubenswrapper[4931]: E0130 06:28:09.165013 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf\": container with ID starting with 3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf not found: ID does not exist" containerID="3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.165076 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf"} err="failed to get container status \"3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf\": rpc error: code = NotFound desc = could not find container \"3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf\": container with ID starting with 3649ec64523234caeca32f0820ba131802cf5e688cbcb0f82056acb9e312cfaf not found: ID does not exist" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.165115 4931 scope.go:117] 
"RemoveContainer" containerID="c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f" Jan 30 06:28:09 crc kubenswrapper[4931]: E0130 06:28:09.165732 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f\": container with ID starting with c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f not found: ID does not exist" containerID="c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.165818 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f"} err="failed to get container status \"c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f\": rpc error: code = NotFound desc = could not find container \"c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f\": container with ID starting with c223b5a5d94837a49b52c559a51252c5dfa68877ed7384b6c7103d28fc93de4f not found: ID does not exist" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.165871 4931 scope.go:117] "RemoveContainer" containerID="7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6" Jan 30 06:28:09 crc kubenswrapper[4931]: E0130 06:28:09.166525 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6\": container with ID starting with 7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6 not found: ID does not exist" containerID="7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.166606 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6"} err="failed to get container status \"7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6\": rpc error: code = NotFound desc = could not find container \"7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6\": container with ID starting with 7c08337cd5832dab4df7a7ecf0ffe08370dc38255d2a6ae0680fbc33f40952d6 not found: ID does not exist" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.233243 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" (UID: "c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.316191 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.400528 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vq4hz"] Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.411400 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vq4hz"] Jan 30 06:28:09 crc kubenswrapper[4931]: I0130 06:28:09.432569 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" path="/var/lib/kubelet/pods/c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b/volumes" Jan 30 06:29:27 crc kubenswrapper[4931]: I0130 06:29:27.363616 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:29:27 crc kubenswrapper[4931]: I0130 06:29:27.364532 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.678595 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 06:29:50 crc kubenswrapper[4931]: E0130 06:29:50.680100 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="extract-utilities" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.680131 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="extract-utilities" Jan 30 06:29:50 crc kubenswrapper[4931]: E0130 06:29:50.680180 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="extract-content" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.680198 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="extract-content" Jan 30 06:29:50 crc kubenswrapper[4931]: E0130 06:29:50.680227 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="registry-server" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.680245 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="registry-server" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.680620 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a8277a-ce11-4fbf-ab16-7f95c7b2a97b" containerName="registry-server" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.681732 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.685976 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qqbkw" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.692351 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.792942 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrs2\" (UniqueName: \"kubernetes.io/projected/371cff3f-3d31-4dc6-98eb-b03f2d967337-kube-api-access-kkrs2\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") " pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.793078 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") " pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.895136 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrs2\" (UniqueName: \"kubernetes.io/projected/371cff3f-3d31-4dc6-98eb-b03f2d967337-kube-api-access-kkrs2\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") " pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.895515 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") " pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.902396 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.902656 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/288f6df6449974b404cf65913d6950f1694034a208c1d00e9450880132f599b0/globalmount\"" pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.923669 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrs2\" (UniqueName: \"kubernetes.io/projected/371cff3f-3d31-4dc6-98eb-b03f2d967337-kube-api-access-kkrs2\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") " pod="openstack/mariadb-copy-data" Jan 30 06:29:50 crc kubenswrapper[4931]: I0130 06:29:50.942088 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9322c3d3-7eeb-4461-b5eb-df57a66b6935\") pod \"mariadb-copy-data\" (UID: \"371cff3f-3d31-4dc6-98eb-b03f2d967337\") " pod="openstack/mariadb-copy-data" Jan 30 06:29:51 crc kubenswrapper[4931]: I0130 06:29:51.008315 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 30 06:29:51 crc kubenswrapper[4931]: I0130 06:29:51.612061 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 30 06:29:51 crc kubenswrapper[4931]: W0130 06:29:51.811907 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod371cff3f_3d31_4dc6_98eb_b03f2d967337.slice/crio-408220d8d1998f399ef671bd3376facd5e3c82722ed4e84405b357afb42ccc95 WatchSource:0}: Error finding container 408220d8d1998f399ef671bd3376facd5e3c82722ed4e84405b357afb42ccc95: Status 404 returned error can't find the container with id 408220d8d1998f399ef671bd3376facd5e3c82722ed4e84405b357afb42ccc95 Jan 30 06:29:52 crc kubenswrapper[4931]: I0130 06:29:52.211053 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"371cff3f-3d31-4dc6-98eb-b03f2d967337","Type":"ContainerStarted","Data":"f1ba84c68713736ab6df303984300eb188f441a6bdf8bef551520210f094feca"} Jan 30 06:29:52 crc kubenswrapper[4931]: I0130 06:29:52.211409 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"371cff3f-3d31-4dc6-98eb-b03f2d967337","Type":"ContainerStarted","Data":"408220d8d1998f399ef671bd3376facd5e3c82722ed4e84405b357afb42ccc95"} Jan 30 06:29:52 crc kubenswrapper[4931]: I0130 06:29:52.230311 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.230290961 podStartE2EDuration="3.230290961s" podCreationTimestamp="2026-01-30 06:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:29:52.229184639 +0000 UTC m=+4927.599094906" watchObservedRunningTime="2026-01-30 06:29:52.230290961 +0000 UTC m=+4927.600201228" Jan 30 06:29:53 crc kubenswrapper[4931]: I0130 06:29:53.843585 4931 scope.go:117] "RemoveContainer" 
containerID="71a8124b599814d410f6d79c3260191d88bc5b88a1405b9bfb832aebcb013dc4" Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.224174 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.226186 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.238916 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.370501 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtfm\" (UniqueName: \"kubernetes.io/projected/78316e93-d485-4836-824e-42cbe23eb625-kube-api-access-7qtfm\") pod \"mariadb-client\" (UID: \"78316e93-d485-4836-824e-42cbe23eb625\") " pod="openstack/mariadb-client" Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.472056 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qtfm\" (UniqueName: \"kubernetes.io/projected/78316e93-d485-4836-824e-42cbe23eb625-kube-api-access-7qtfm\") pod \"mariadb-client\" (UID: \"78316e93-d485-4836-824e-42cbe23eb625\") " pod="openstack/mariadb-client" Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.505574 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qtfm\" (UniqueName: \"kubernetes.io/projected/78316e93-d485-4836-824e-42cbe23eb625-kube-api-access-7qtfm\") pod \"mariadb-client\" (UID: \"78316e93-d485-4836-824e-42cbe23eb625\") " pod="openstack/mariadb-client" Jan 30 06:29:55 crc kubenswrapper[4931]: I0130 06:29:55.587936 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:29:56 crc kubenswrapper[4931]: I0130 06:29:56.091941 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:56 crc kubenswrapper[4931]: W0130 06:29:56.096203 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78316e93_d485_4836_824e_42cbe23eb625.slice/crio-2ce03d4660d6878d1ae82ff00b48e5693133d8305792a3c56103245b882c80a4 WatchSource:0}: Error finding container 2ce03d4660d6878d1ae82ff00b48e5693133d8305792a3c56103245b882c80a4: Status 404 returned error can't find the container with id 2ce03d4660d6878d1ae82ff00b48e5693133d8305792a3c56103245b882c80a4 Jan 30 06:29:56 crc kubenswrapper[4931]: I0130 06:29:56.294035 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"78316e93-d485-4836-824e-42cbe23eb625","Type":"ContainerStarted","Data":"2ce03d4660d6878d1ae82ff00b48e5693133d8305792a3c56103245b882c80a4"} Jan 30 06:29:57 crc kubenswrapper[4931]: I0130 06:29:57.305658 4931 generic.go:334] "Generic (PLEG): container finished" podID="78316e93-d485-4836-824e-42cbe23eb625" containerID="c20d2d48ca6794144eddad5037d464e6a9ffdad2028bd7ef00590c377c6183ff" exitCode=0 Jan 30 06:29:57 crc kubenswrapper[4931]: I0130 06:29:57.305899 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"78316e93-d485-4836-824e-42cbe23eb625","Type":"ContainerDied","Data":"c20d2d48ca6794144eddad5037d464e6a9ffdad2028bd7ef00590c377c6183ff"} Jan 30 06:29:57 crc kubenswrapper[4931]: I0130 06:29:57.362963 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:29:57 crc kubenswrapper[4931]: I0130 06:29:57.363028 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.679105 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.705842 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_78316e93-d485-4836-824e-42cbe23eb625/mariadb-client/0.log" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.735090 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.739937 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.823457 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qtfm\" (UniqueName: \"kubernetes.io/projected/78316e93-d485-4836-824e-42cbe23eb625-kube-api-access-7qtfm\") pod \"78316e93-d485-4836-824e-42cbe23eb625\" (UID: \"78316e93-d485-4836-824e-42cbe23eb625\") " Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.828658 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78316e93-d485-4836-824e-42cbe23eb625-kube-api-access-7qtfm" (OuterVolumeSpecName: "kube-api-access-7qtfm") pod "78316e93-d485-4836-824e-42cbe23eb625" (UID: "78316e93-d485-4836-824e-42cbe23eb625"). InnerVolumeSpecName "kube-api-access-7qtfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.913288 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:59 crc kubenswrapper[4931]: E0130 06:29:58.913635 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78316e93-d485-4836-824e-42cbe23eb625" containerName="mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.913649 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="78316e93-d485-4836-824e-42cbe23eb625" containerName="mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.913788 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="78316e93-d485-4836-824e-42cbe23eb625" containerName="mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.914243 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.918950 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:58.925020 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qtfm\" (UniqueName: \"kubernetes.io/projected/78316e93-d485-4836-824e-42cbe23eb625-kube-api-access-7qtfm\") on node \"crc\" DevicePath \"\"" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.025814 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvt4\" (UniqueName: \"kubernetes.io/projected/e7624857-270e-497b-b3b1-51df662ce3dc-kube-api-access-vlvt4\") pod \"mariadb-client\" (UID: \"e7624857-270e-497b-b3b1-51df662ce3dc\") " pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.127016 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvt4\" (UniqueName: \"kubernetes.io/projected/e7624857-270e-497b-b3b1-51df662ce3dc-kube-api-access-vlvt4\") pod \"mariadb-client\" (UID: \"e7624857-270e-497b-b3b1-51df662ce3dc\") " pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.148003 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvt4\" (UniqueName: \"kubernetes.io/projected/e7624857-270e-497b-b3b1-51df662ce3dc-kube-api-access-vlvt4\") pod \"mariadb-client\" (UID: \"e7624857-270e-497b-b3b1-51df662ce3dc\") " pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.233311 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.329396 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce03d4660d6878d1ae82ff00b48e5693133d8305792a3c56103245b882c80a4" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.329746 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.357027 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="78316e93-d485-4836-824e-42cbe23eb625" podUID="e7624857-270e-497b-b3b1-51df662ce3dc" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.437823 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78316e93-d485-4836-824e-42cbe23eb625" path="/var/lib/kubelet/pods/78316e93-d485-4836-824e-42cbe23eb625/volumes" Jan 30 06:29:59 crc kubenswrapper[4931]: I0130 06:29:59.681605 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:29:59 crc kubenswrapper[4931]: W0130 06:29:59.684463 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7624857_270e_497b_b3b1_51df662ce3dc.slice/crio-e553feece4667a023db5fed463f69eb82294d8b6104fd7b3fc9694171a2ab0d5 WatchSource:0}: Error finding container e553feece4667a023db5fed463f69eb82294d8b6104fd7b3fc9694171a2ab0d5: Status 404 returned error can't find the container with id e553feece4667a023db5fed463f69eb82294d8b6104fd7b3fc9694171a2ab0d5 Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.147381 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2"] Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.149594 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.155524 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.155944 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.174711 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2"] Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.247149 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96da373-c61f-4a59-9311-65f140a354a4-secret-volume\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.247297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96da373-c61f-4a59-9311-65f140a354a4-config-volume\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.247353 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nnv\" (UniqueName: \"kubernetes.io/projected/e96da373-c61f-4a59-9311-65f140a354a4-kube-api-access-x7nnv\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.341631 4931 generic.go:334] "Generic (PLEG): container finished" podID="e7624857-270e-497b-b3b1-51df662ce3dc" containerID="173f20cd392f59bb3d09e8d879e9a2c54ad0461fcc8850325a690b330805f7aa" exitCode=0 Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.341705 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e7624857-270e-497b-b3b1-51df662ce3dc","Type":"ContainerDied","Data":"173f20cd392f59bb3d09e8d879e9a2c54ad0461fcc8850325a690b330805f7aa"} Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.341753 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"e7624857-270e-497b-b3b1-51df662ce3dc","Type":"ContainerStarted","Data":"e553feece4667a023db5fed463f69eb82294d8b6104fd7b3fc9694171a2ab0d5"} Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.349350 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7nnv\" (UniqueName: \"kubernetes.io/projected/e96da373-c61f-4a59-9311-65f140a354a4-kube-api-access-x7nnv\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.349518 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96da373-c61f-4a59-9311-65f140a354a4-secret-volume\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.349581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96da373-c61f-4a59-9311-65f140a354a4-config-volume\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.350680 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96da373-c61f-4a59-9311-65f140a354a4-config-volume\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.361042 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96da373-c61f-4a59-9311-65f140a354a4-secret-volume\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.382299 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7nnv\" (UniqueName: \"kubernetes.io/projected/e96da373-c61f-4a59-9311-65f140a354a4-kube-api-access-x7nnv\") pod \"collect-profiles-29495910-ccdq2\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.482414 4931 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:00 crc kubenswrapper[4931]: I0130 06:30:00.970894 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2"] Jan 30 06:30:01 crc kubenswrapper[4931]: I0130 06:30:01.350300 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" event={"ID":"e96da373-c61f-4a59-9311-65f140a354a4","Type":"ContainerStarted","Data":"705db526c932889de3f11f056c165dba633c441108b3e93fffe75f611e076e31"} Jan 30 06:30:01 crc kubenswrapper[4931]: I0130 06:30:01.350363 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" event={"ID":"e96da373-c61f-4a59-9311-65f140a354a4","Type":"ContainerStarted","Data":"29394f67334e83e67eda76d071643e6a82eb5e42f9506be9f8b3cba2f0463934"} Jan 30 06:30:01 crc kubenswrapper[4931]: I0130 06:30:01.372820 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" podStartSLOduration=1.372798006 podStartE2EDuration="1.372798006s" podCreationTimestamp="2026-01-30 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:30:01.370305264 +0000 UTC m=+4936.740215531" watchObservedRunningTime="2026-01-30 06:30:01.372798006 +0000 UTC m=+4936.742708263" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.060010 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.083412 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_e7624857-270e-497b-b3b1-51df662ce3dc/mariadb-client/0.log" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.115052 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.122286 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.179151 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvt4\" (UniqueName: \"kubernetes.io/projected/e7624857-270e-497b-b3b1-51df662ce3dc-kube-api-access-vlvt4\") pod \"e7624857-270e-497b-b3b1-51df662ce3dc\" (UID: \"e7624857-270e-497b-b3b1-51df662ce3dc\") " Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.185701 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7624857-270e-497b-b3b1-51df662ce3dc-kube-api-access-vlvt4" (OuterVolumeSpecName: "kube-api-access-vlvt4") pod "e7624857-270e-497b-b3b1-51df662ce3dc" (UID: "e7624857-270e-497b-b3b1-51df662ce3dc"). InnerVolumeSpecName "kube-api-access-vlvt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.281657 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvt4\" (UniqueName: \"kubernetes.io/projected/e7624857-270e-497b-b3b1-51df662ce3dc-kube-api-access-vlvt4\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.360189 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e553feece4667a023db5fed463f69eb82294d8b6104fd7b3fc9694171a2ab0d5" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.360296 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.363041 4931 generic.go:334] "Generic (PLEG): container finished" podID="e96da373-c61f-4a59-9311-65f140a354a4" containerID="705db526c932889de3f11f056c165dba633c441108b3e93fffe75f611e076e31" exitCode=0 Jan 30 06:30:02 crc kubenswrapper[4931]: I0130 06:30:02.363076 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" event={"ID":"e96da373-c61f-4a59-9311-65f140a354a4","Type":"ContainerDied","Data":"705db526c932889de3f11f056c165dba633c441108b3e93fffe75f611e076e31"} Jan 30 06:30:03 crc kubenswrapper[4931]: I0130 06:30:03.436858 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7624857-270e-497b-b3b1-51df662ce3dc" path="/var/lib/kubelet/pods/e7624857-270e-497b-b3b1-51df662ce3dc/volumes" Jan 30 06:30:03 crc kubenswrapper[4931]: I0130 06:30:03.908055 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.007235 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96da373-c61f-4a59-9311-65f140a354a4-config-volume\") pod \"e96da373-c61f-4a59-9311-65f140a354a4\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.007694 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96da373-c61f-4a59-9311-65f140a354a4-secret-volume\") pod \"e96da373-c61f-4a59-9311-65f140a354a4\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.008013 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7nnv\" (UniqueName: \"kubernetes.io/projected/e96da373-c61f-4a59-9311-65f140a354a4-kube-api-access-x7nnv\") pod \"e96da373-c61f-4a59-9311-65f140a354a4\" (UID: \"e96da373-c61f-4a59-9311-65f140a354a4\") " Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.008469 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96da373-c61f-4a59-9311-65f140a354a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "e96da373-c61f-4a59-9311-65f140a354a4" (UID: "e96da373-c61f-4a59-9311-65f140a354a4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.016072 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96da373-c61f-4a59-9311-65f140a354a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e96da373-c61f-4a59-9311-65f140a354a4" (UID: "e96da373-c61f-4a59-9311-65f140a354a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.016685 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96da373-c61f-4a59-9311-65f140a354a4-kube-api-access-x7nnv" (OuterVolumeSpecName: "kube-api-access-x7nnv") pod "e96da373-c61f-4a59-9311-65f140a354a4" (UID: "e96da373-c61f-4a59-9311-65f140a354a4"). InnerVolumeSpecName "kube-api-access-x7nnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.110696 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7nnv\" (UniqueName: \"kubernetes.io/projected/e96da373-c61f-4a59-9311-65f140a354a4-kube-api-access-x7nnv\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.110745 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96da373-c61f-4a59-9311-65f140a354a4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.110758 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e96da373-c61f-4a59-9311-65f140a354a4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.385052 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" event={"ID":"e96da373-c61f-4a59-9311-65f140a354a4","Type":"ContainerDied","Data":"29394f67334e83e67eda76d071643e6a82eb5e42f9506be9f8b3cba2f0463934"} Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.385146 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29394f67334e83e67eda76d071643e6a82eb5e42f9506be9f8b3cba2f0463934" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.385094 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495910-ccdq2" Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.457792 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n"] Jan 30 06:30:04 crc kubenswrapper[4931]: I0130 06:30:04.470807 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495865-q2t6n"] Jan 30 06:30:05 crc kubenswrapper[4931]: I0130 06:30:05.436548 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ad7b66-28c5-436b-9dc4-86be3d48787b" path="/var/lib/kubelet/pods/71ad7b66-28c5-436b-9dc4-86be3d48787b/volumes" Jan 30 06:30:27 crc kubenswrapper[4931]: I0130 06:30:27.363040 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:30:27 crc kubenswrapper[4931]: I0130 06:30:27.363601 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:30:27 crc kubenswrapper[4931]: I0130 06:30:27.363647 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:30:27 crc kubenswrapper[4931]: I0130 06:30:27.364330 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:30:27 crc kubenswrapper[4931]: I0130 06:30:27.364386 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" gracePeriod=600 Jan 30 06:30:28 crc kubenswrapper[4931]: E0130 06:30:28.591115 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:30:28 crc kubenswrapper[4931]: I0130 06:30:28.623772 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" exitCode=0 Jan 30 06:30:28 crc kubenswrapper[4931]: I0130 06:30:28.623872 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b"} Jan 30 06:30:28 crc kubenswrapper[4931]: I0130 06:30:28.624083 4931 scope.go:117] "RemoveContainer" containerID="2c01137c96244e8746673f3822245d96905f563a3e4aa39b2a8d7db22e60ff5b" Jan 30 06:30:28 crc kubenswrapper[4931]: I0130 06:30:28.624657 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:30:28 crc kubenswrapper[4931]: E0130 06:30:28.624935 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.770319 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6b9gc"] Jan 30 06:30:32 crc kubenswrapper[4931]: E0130 06:30:32.771038 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96da373-c61f-4a59-9311-65f140a354a4" containerName="collect-profiles" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.771050 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96da373-c61f-4a59-9311-65f140a354a4" containerName="collect-profiles" Jan 30 06:30:32 crc kubenswrapper[4931]: E0130 06:30:32.771076 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7624857-270e-497b-b3b1-51df662ce3dc" containerName="mariadb-client" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.771082 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7624857-270e-497b-b3b1-51df662ce3dc" containerName="mariadb-client" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.771216 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96da373-c61f-4a59-9311-65f140a354a4" containerName="collect-profiles" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.771226 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7624857-270e-497b-b3b1-51df662ce3dc" containerName="mariadb-client" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.772335 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.776198 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b9gc"] Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.976332 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-catalog-content\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.976489 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-utilities\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:32 crc kubenswrapper[4931]: I0130 06:30:32.976804 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfcln\" (UniqueName: \"kubernetes.io/projected/99ddeac4-7ac5-423d-8eba-59f5162fa8df-kube-api-access-pfcln\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.078168 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfcln\" (UniqueName: \"kubernetes.io/projected/99ddeac4-7ac5-423d-8eba-59f5162fa8df-kube-api-access-pfcln\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.078243 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-catalog-content\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.078292 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-utilities\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.078735 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-utilities\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.078981 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-catalog-content\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.098943 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pfcln\" (UniqueName: \"kubernetes.io/projected/99ddeac4-7ac5-423d-8eba-59f5162fa8df-kube-api-access-pfcln\") pod \"redhat-operators-6b9gc\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.144231 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.577185 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6b9gc"] Jan 30 06:30:33 crc kubenswrapper[4931]: I0130 06:30:33.673284 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerStarted","Data":"1de6c21a8cc92620f7e298031965ad6e56e27bd6a60708eec9d6fd5a55666e05"} Jan 30 06:30:34 crc kubenswrapper[4931]: I0130 06:30:34.686126 4931 generic.go:334] "Generic (PLEG): container finished" podID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerID="4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f" exitCode=0 Jan 30 06:30:34 crc kubenswrapper[4931]: I0130 06:30:34.686186 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerDied","Data":"4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f"} Jan 30 06:30:34 crc kubenswrapper[4931]: I0130 06:30:34.689735 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:30:36 crc kubenswrapper[4931]: I0130 06:30:36.705955 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerStarted","Data":"bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac"} Jan 30 06:30:37 crc kubenswrapper[4931]: I0130 06:30:37.719490 4931 generic.go:334] "Generic (PLEG): container finished" podID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerID="bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac" exitCode=0 Jan 30 06:30:37 crc kubenswrapper[4931]: I0130 06:30:37.719541 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerDied","Data":"bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac"} Jan 30 06:30:38 crc kubenswrapper[4931]: I0130 06:30:38.728508 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerStarted","Data":"a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760"} Jan 30 06:30:38 crc kubenswrapper[4931]: I0130 06:30:38.756256 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6b9gc" podStartSLOduration=3.079488651 podStartE2EDuration="6.756226891s" podCreationTimestamp="2026-01-30 06:30:32 +0000 UTC" firstStartedPulling="2026-01-30 06:30:34.689448777 +0000 UTC m=+4970.059359044" lastFinishedPulling="2026-01-30 06:30:38.366187027 +0000 UTC m=+4973.736097284" observedRunningTime="2026-01-30 06:30:38.7499109 +0000 UTC m=+4974.119821187" watchObservedRunningTime="2026-01-30 06:30:38.756226891 +0000 UTC m=+4974.126137168" Jan 30 06:30:43 crc 
kubenswrapper[4931]: I0130 06:30:43.144652 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:43 crc kubenswrapper[4931]: I0130 06:30:43.144937 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:44 crc kubenswrapper[4931]: I0130 06:30:44.214569 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6b9gc" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="registry-server" probeResult="failure" output=< Jan 30 06:30:44 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:30:44 crc kubenswrapper[4931]: > Jan 30 06:30:44 crc kubenswrapper[4931]: I0130 06:30:44.422216 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:30:44 crc kubenswrapper[4931]: E0130 06:30:44.422452 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:30:53 crc kubenswrapper[4931]: I0130 06:30:53.204749 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:53 crc kubenswrapper[4931]: I0130 06:30:53.279457 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:53 crc kubenswrapper[4931]: I0130 06:30:53.450814 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b9gc"] Jan 30 06:30:53 crc kubenswrapper[4931]: I0130 06:30:53.914733 4931 scope.go:117] "RemoveContainer" containerID="0262628a4935b4dc10f986d98e7493ff62eab4841805fce6eb8783a9ef5f62e3" Jan 30 06:30:54 crc kubenswrapper[4931]: I0130 06:30:54.891730 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6b9gc" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="registry-server" containerID="cri-o://a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760" gracePeriod=2 Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.759724 4931 util.go:48] "No ready sandbox for pod can be found. 
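redhat-operators-6b9gc follows the marketplace catalog pod pattern: two init containers (extract-utilities, then extract-content, both finishing with exit code 0 above) populate emptyDir volumes, after which registry-server serves the catalog over gRPC on :50051. The startup-probe failure output, timeout: failed to connect service ":50051" within 1s, is grpc_health_probe's message, and the eventual kill uses gracePeriod=2. A hedged sketch consistent with those observations; container order, volume names, port, probe timeout, and grace period come from the log, while images, commands, and mount paths are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: redhat-operators-6b9gc
  namespace: openshift-marketplace
spec:
  terminationGracePeriodSeconds: 2  # consistent with gracePeriod=2 in the kill message
  initContainers:
  - name: extract-utilities
    image: quay.io/example/opm-utilities:latest               # hypothetical
    command: ["cp", "/bin/grpc_health_probe", "/utilities/"]  # assumption about what it extracts
    volumeMounts:
    - name: utilities
      mountPath: /utilities
  - name: extract-content
    image: registry.example.com/redhat-operator-index:latest    # hypothetical catalog image
    command: ["sh", "-c", "cp -r /configs /extracted-catalog/"] # assumption
    volumeMounts:
    - name: catalog-content
      mountPath: /extracted-catalog
  containers:
  - name: registry-server
    image: registry.example.com/redhat-operator-index:latest  # hypothetical
    ports:
    - name: grpc
      containerPort: 50051
    startupProbe:
      exec:
        command: ["/utilities/grpc_health_probe", "-addr=:50051"]  # matches the logged probe output
      timeoutSeconds: 1     # "within 1s" in the failure message
      failureThreshold: 15  # assumption; the probe failed at 06:30:44 and passed by 06:30:53
    readinessProbe:
      exec:
        command: ["/utilities/grpc_health_probe", "-addr=:50051"]
    volumeMounts:
    - name: utilities
      mountPath: /utilities
    - name: catalog-content
      mountPath: /extracted-catalog
  volumes:
  - name: utilities
    emptyDir: {}
  - name: catalog-content
    emptyDir: {}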
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.861363 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-utilities\") pod \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.861562 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfcln\" (UniqueName: \"kubernetes.io/projected/99ddeac4-7ac5-423d-8eba-59f5162fa8df-kube-api-access-pfcln\") pod \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.861632 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-catalog-content\") pod \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\" (UID: \"99ddeac4-7ac5-423d-8eba-59f5162fa8df\") " Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.862797 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-utilities" (OuterVolumeSpecName: "utilities") pod "99ddeac4-7ac5-423d-8eba-59f5162fa8df" (UID: "99ddeac4-7ac5-423d-8eba-59f5162fa8df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.870309 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ddeac4-7ac5-423d-8eba-59f5162fa8df-kube-api-access-pfcln" (OuterVolumeSpecName: "kube-api-access-pfcln") pod "99ddeac4-7ac5-423d-8eba-59f5162fa8df" (UID: "99ddeac4-7ac5-423d-8eba-59f5162fa8df"). InnerVolumeSpecName "kube-api-access-pfcln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.904521 4931 generic.go:334] "Generic (PLEG): container finished" podID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerID="a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760" exitCode=0 Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.904573 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerDied","Data":"a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760"} Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.904595 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6b9gc" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.904620 4931 scope.go:117] "RemoveContainer" containerID="a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.904606 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6b9gc" event={"ID":"99ddeac4-7ac5-423d-8eba-59f5162fa8df","Type":"ContainerDied","Data":"1de6c21a8cc92620f7e298031965ad6e56e27bd6a60708eec9d6fd5a55666e05"} Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.941469 4931 scope.go:117] "RemoveContainer" containerID="bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.962767 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.962809 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfcln\" (UniqueName: \"kubernetes.io/projected/99ddeac4-7ac5-423d-8eba-59f5162fa8df-kube-api-access-pfcln\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:55 crc kubenswrapper[4931]: I0130 06:30:55.972652 4931 scope.go:117] "RemoveContainer" containerID="4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.020385 4931 scope.go:117] "RemoveContainer" containerID="a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760" Jan 30 06:30:56 crc kubenswrapper[4931]: E0130 06:30:56.020925 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760\": container with ID starting with a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760 not found: ID does not exist" containerID="a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.020953 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760"} err="failed to get container status \"a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760\": rpc error: code = NotFound desc = could not find container \"a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760\": container with ID starting with a5cad92a5a23adb264c0785ebc4f46dcc7a51b1aaa9e8165b6fabd6e6aa56760 not found: ID does not exist" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.020973 4931 scope.go:117] "RemoveContainer" containerID="bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac" Jan 30 06:30:56 crc kubenswrapper[4931]: E0130 06:30:56.021343 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac\": container with ID starting with bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac not found: ID does not exist" containerID="bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.021371 4931 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac"} err="failed to get container status \"bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac\": rpc error: code = NotFound desc = could not find container \"bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac\": container with ID starting with bf2670308f447f9ad8224e13b2d9d4ca3519f70e61f61c7aad9f77a0366e7eac not found: ID does not exist" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.021386 4931 scope.go:117] "RemoveContainer" containerID="4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f" Jan 30 06:30:56 crc kubenswrapper[4931]: E0130 06:30:56.021735 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f\": container with ID starting with 4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f not found: ID does not exist" containerID="4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.021761 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f"} err="failed to get container status \"4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f\": rpc error: code = NotFound desc = could not find container \"4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f\": container with ID starting with 4356cf57ad96373bd7eaa048120ef15b76fbf1b09320028365330caad7053f7f not found: ID does not exist" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.037159 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99ddeac4-7ac5-423d-8eba-59f5162fa8df" (UID: "99ddeac4-7ac5-423d-8eba-59f5162fa8df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.063898 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ddeac4-7ac5-423d-8eba-59f5162fa8df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.256085 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6b9gc"] Jan 30 06:30:56 crc kubenswrapper[4931]: I0130 06:30:56.261344 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6b9gc"] Jan 30 06:30:57 crc kubenswrapper[4931]: I0130 06:30:57.437036 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" path="/var/lib/kubelet/pods/99ddeac4-7ac5-423d-8eba-59f5162fa8df/volumes" Jan 30 06:30:58 crc kubenswrapper[4931]: I0130 06:30:58.422388 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:30:58 crc kubenswrapper[4931]: E0130 06:30:58.423217 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:31:12 crc kubenswrapper[4931]: I0130 06:31:12.421737 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:31:12 crc kubenswrapper[4931]: E0130 06:31:12.422762 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:31:23 crc kubenswrapper[4931]: I0130 06:31:23.421927 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:31:23 crc kubenswrapper[4931]: E0130 06:31:23.422784 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:31:38 crc kubenswrapper[4931]: I0130 06:31:38.422490 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:31:38 crc kubenswrapper[4931]: E0130 06:31:38.425349 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:31:48 crc kubenswrapper[4931]: I0130 06:31:48.107747 4931 trace.go:236] Trace[1878925583]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (30-Jan-2026 06:31:47.081) (total time: 1026ms): Jan 30 06:31:48 crc kubenswrapper[4931]: Trace[1878925583]: [1.026658415s] [1.026658415s] END Jan 30 06:31:49 crc kubenswrapper[4931]: I0130 06:31:49.423987 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:31:49 crc kubenswrapper[4931]: E0130 06:31:49.424553 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:31:56 crc kubenswrapper[4931]: E0130 06:31:56.771555 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:44662->38.102.83.179:45103: write tcp 38.102.83.179:44662->38.102.83.179:45103: write: broken pipe Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.108406 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 06:31:59 crc kubenswrapper[4931]: E0130 06:31:59.109112 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="registry-server" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.109130 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="registry-server" Jan 30 06:31:59 crc kubenswrapper[4931]: E0130 06:31:59.109146 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="extract-utilities" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.109154 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="extract-utilities" Jan 30 06:31:59 crc kubenswrapper[4931]: E0130 06:31:59.109174 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="extract-content" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.109184 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="extract-content" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.109372 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ddeac4-7ac5-423d-8eba-59f5162fa8df" containerName="registry-server" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.110292 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.118270 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.118783 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.119177 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5gfqb" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.127400 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.147978 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.160309 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.179238 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.182289 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.215727 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.222058 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.268858 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83787678-4305-4893-8aa4-d1ddd8c15343-config\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.268910 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83787678-4305-4893-8aa4-d1ddd8c15343-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.268942 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.268975 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7l2x\" (UniqueName: \"kubernetes.io/projected/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-kube-api-access-f7l2x\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269018 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-config\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " 
pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269048 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc86399-3f01-4c6a-942d-b255a957dc52-config\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269074 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269176 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83787678-4305-4893-8aa4-d1ddd8c15343-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269222 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269281 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82lrm\" (UniqueName: \"kubernetes.io/projected/83787678-4305-4893-8aa4-d1ddd8c15343-kube-api-access-82lrm\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269358 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269377 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269398 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clwbf\" (UniqueName: \"kubernetes.io/projected/ffc86399-3f01-4c6a-942d-b255a957dc52-kube-api-access-clwbf\") pod \"ovsdbserver-sb-0\" (UID: 
\"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269431 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffc86399-3f01-4c6a-942d-b255a957dc52-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269464 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffc86399-3f01-4c6a-942d-b255a957dc52-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269489 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc86399-3f01-4c6a-942d-b255a957dc52-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.269506 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83787678-4305-4893-8aa4-d1ddd8c15343-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.283056 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.284482 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.286503 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jvt4p" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.286630 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.287038 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.310525 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.321589 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.323896 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.331980 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.334912 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.337783 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.350163 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.370989 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83787678-4305-4893-8aa4-d1ddd8c15343-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371041 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gp68\" (UniqueName: \"kubernetes.io/projected/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-kube-api-access-7gp68\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371085 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82lrm\" (UniqueName: \"kubernetes.io/projected/83787678-4305-4893-8aa4-d1ddd8c15343-kube-api-access-82lrm\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371114 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371152 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371171 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371189 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371206 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-clwbf\" (UniqueName: \"kubernetes.io/projected/ffc86399-3f01-4c6a-942d-b255a957dc52-kube-api-access-clwbf\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371224 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffc86399-3f01-4c6a-942d-b255a957dc52-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371265 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffc86399-3f01-4c6a-942d-b255a957dc52-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371279 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc86399-3f01-4c6a-942d-b255a957dc52-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371292 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83787678-4305-4893-8aa4-d1ddd8c15343-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371317 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-config\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371340 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83787678-4305-4893-8aa4-d1ddd8c15343-config\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371357 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83787678-4305-4893-8aa4-d1ddd8c15343-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371374 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\") pod \"ovsdbserver-nb-0\" (UID: 
\"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371410 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371440 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7l2x\" (UniqueName: \"kubernetes.io/projected/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-kube-api-access-f7l2x\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371470 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-config\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371491 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc86399-3f01-4c6a-942d-b255a957dc52-config\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371507 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371873 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffc86399-3f01-4c6a-942d-b255a957dc52-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.371944 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.372237 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/83787678-4305-4893-8aa4-d1ddd8c15343-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.372912 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-config\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.373562 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc86399-3f01-4c6a-942d-b255a957dc52-config\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.373846 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83787678-4305-4893-8aa4-d1ddd8c15343-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.374246 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.374545 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83787678-4305-4893-8aa4-d1ddd8c15343-config\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.374606 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffc86399-3f01-4c6a-942d-b255a957dc52-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.379078 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83787678-4305-4893-8aa4-d1ddd8c15343-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.379824 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.379852 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/38ceb5621d454c1692b720cf30f10e3b664762114fdc3e7d5b38f883b904b6d8/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.379898 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.379934 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc86399-3f01-4c6a-942d-b255a957dc52-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.379932 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/542e0ca3a7e896c286d5fb36340e8db4366e8a6a9e4a26986a8767cc153e14e9/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.380142 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.380160 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0df4613f538e56a2f714ec1b65ac6bc28b7d7e300b847859a521a3448da838ee/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.384905 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.391265 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7l2x\" (UniqueName: \"kubernetes.io/projected/78634c9d-d8d8-4eed-adc7-fe9fdbf69a11-kube-api-access-f7l2x\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.391861 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clwbf\" (UniqueName: \"kubernetes.io/projected/ffc86399-3f01-4c6a-942d-b255a957dc52-kube-api-access-clwbf\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.396612 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82lrm\" (UniqueName: \"kubernetes.io/projected/83787678-4305-4893-8aa4-d1ddd8c15343-kube-api-access-82lrm\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.407899 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0e9ceaa-7ade-4f8a-bd83-6f37f450fbc6\") pod \"ovsdbserver-sb-2\" (UID: \"83787678-4305-4893-8aa4-d1ddd8c15343\") " 
pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.409820 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0aaad663-0830-4310-a7ba-80cfdf78a81e\") pod \"ovsdbserver-sb-0\" (UID: \"ffc86399-3f01-4c6a-942d-b255a957dc52\") " pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.410087 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2318bf44-a8fd-4089-a284-9ff294284d8e\") pod \"ovsdbserver-sb-1\" (UID: \"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11\") " pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.449552 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473062 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473120 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-config\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473160 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473183 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60983391-7945-4efe-ae6d-7c6ae80e2df8-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473209 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473234 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473268 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/60983391-7945-4efe-ae6d-7c6ae80e2df8-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473318 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e651c73-1761-4cda-83b7-5a80fa3af6f4-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e651c73-1761-4cda-83b7-5a80fa3af6f4-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473362 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn774\" (UniqueName: \"kubernetes.io/projected/60983391-7945-4efe-ae6d-7c6ae80e2df8-kube-api-access-mn774\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473435 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gp68\" (UniqueName: \"kubernetes.io/projected/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-kube-api-access-7gp68\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473476 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60983391-7945-4efe-ae6d-7c6ae80e2df8-config\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473508 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvkx\" (UniqueName: \"kubernetes.io/projected/6e651c73-1761-4cda-83b7-5a80fa3af6f4-kube-api-access-9xvkx\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473538 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e651c73-1761-4cda-83b7-5a80fa3af6f4-config\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473592 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " 
pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473614 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60983391-7945-4efe-ae6d-7c6ae80e2df8-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.473644 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e651c73-1761-4cda-83b7-5a80fa3af6f4-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.474270 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-config\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.474354 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.475089 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.476527 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.476564 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1fb14bde885f1ccb5a7f001f4547bab047e87e1e6bfcc6b23b1fd9ec2ced19f1/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.477972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.486995 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.492801 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gp68\" (UniqueName: \"kubernetes.io/projected/a05ee1a7-c012-4766-8d48-3b508d4f8cd2-kube-api-access-7gp68\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.509568 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-779531dd-d96b-4d2a-bdb2-66dcf36f819e\") pod \"ovsdbserver-nb-0\" (UID: \"a05ee1a7-c012-4766-8d48-3b508d4f8cd2\") " pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.523797 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575001 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60983391-7945-4efe-ae6d-7c6ae80e2df8-config\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575382 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvkx\" (UniqueName: \"kubernetes.io/projected/6e651c73-1761-4cda-83b7-5a80fa3af6f4-kube-api-access-9xvkx\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575433 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e651c73-1761-4cda-83b7-5a80fa3af6f4-config\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575482 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60983391-7945-4efe-ae6d-7c6ae80e2df8-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575515 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e651c73-1761-4cda-83b7-5a80fa3af6f4-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575554 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575594 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60983391-7945-4efe-ae6d-7c6ae80e2df8-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " 
pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575620 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575655 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60983391-7945-4efe-ae6d-7c6ae80e2df8-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575701 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e651c73-1761-4cda-83b7-5a80fa3af6f4-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575721 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e651c73-1761-4cda-83b7-5a80fa3af6f4-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.575779 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn774\" (UniqueName: \"kubernetes.io/projected/60983391-7945-4efe-ae6d-7c6ae80e2df8-kube-api-access-mn774\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.576219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60983391-7945-4efe-ae6d-7c6ae80e2df8-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.576307 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e651c73-1761-4cda-83b7-5a80fa3af6f4-config\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.576636 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e651c73-1761-4cda-83b7-5a80fa3af6f4-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.577179 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60983391-7945-4efe-ae6d-7c6ae80e2df8-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.578794 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60983391-7945-4efe-ae6d-7c6ae80e2df8-config\") pod \"ovsdbserver-nb-2\" (UID: 
\"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.579069 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.579109 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d65f24d1bc535aa5e17a6778d651148a816e7ea430f8275e0cc96cf15566781c/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.579277 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.579310 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/190f236266ca958f7a75057eeaca477769ef49c449509c364a9625d5cefac56c/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.586113 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e651c73-1761-4cda-83b7-5a80fa3af6f4-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.586303 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60983391-7945-4efe-ae6d-7c6ae80e2df8-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.591508 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e651c73-1761-4cda-83b7-5a80fa3af6f4-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.598782 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvkx\" (UniqueName: \"kubernetes.io/projected/6e651c73-1761-4cda-83b7-5a80fa3af6f4-kube-api-access-9xvkx\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.610758 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn774\" (UniqueName: \"kubernetes.io/projected/60983391-7945-4efe-ae6d-7c6ae80e2df8-kube-api-access-mn774\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.613107 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7c5ea30-aac7-470e-8fac-d4800be43434\") pod \"ovsdbserver-nb-2\" (UID: \"60983391-7945-4efe-ae6d-7c6ae80e2df8\") " pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.616652 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5a42d7d3-d189-4f42-920b-cb33dd5ab83b\") pod \"ovsdbserver-nb-1\" (UID: \"6e651c73-1761-4cda-83b7-5a80fa3af6f4\") " pod="openstack/ovsdbserver-nb-1" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.624598 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.658761 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 30 06:31:59 crc kubenswrapper[4931]: I0130 06:31:59.665720 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.019258 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.129766 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.215586 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.422465 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:32:00 crc kubenswrapper[4931]: E0130 06:32:00.422952 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.488018 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11","Type":"ContainerStarted","Data":"f8fe52323f9c565992ae49c5761a423a277d792d28745a9d5c213776cbc6f203"} Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.489538 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ffc86399-3f01-4c6a-942d-b255a957dc52","Type":"ContainerStarted","Data":"635b99e9ac33ac446dad4dec7dab360cedda8cf2f3dd5235c3061e01fc59154c"} Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.492168 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"83787678-4305-4893-8aa4-d1ddd8c15343","Type":"ContainerStarted","Data":"55cf12283c75a6bfec709c842346840b484deeea772453851bdf7ddca790cb95"} Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.836314 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 30 06:32:00 crc kubenswrapper[4931]: W0130 06:32:00.844252 4931 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60983391_7945_4efe_ae6d_7c6ae80e2df8.slice/crio-0ef299b8cff4ed4f3c0da7b0f9cf13517fb2cb74d4316f5878117a686e945840 WatchSource:0}: Error finding container 0ef299b8cff4ed4f3c0da7b0f9cf13517fb2cb74d4316f5878117a686e945840: Status 404 returned error can't find the container with id 0ef299b8cff4ed4f3c0da7b0f9cf13517fb2cb74d4316f5878117a686e945840 Jan 30 06:32:00 crc kubenswrapper[4931]: I0130 06:32:00.983558 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 30 06:32:00 crc kubenswrapper[4931]: W0130 06:32:00.992393 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e651c73_1761_4cda_83b7_5a80fa3af6f4.slice/crio-1496e030a874e43133b8be8391443ccb9a7aa903acecd2b41639c0b1993fe1f9 WatchSource:0}: Error finding container 1496e030a874e43133b8be8391443ccb9a7aa903acecd2b41639c0b1993fe1f9: Status 404 returned error can't find the container with id 1496e030a874e43133b8be8391443ccb9a7aa903acecd2b41639c0b1993fe1f9 Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.299630 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 06:32:01 crc kubenswrapper[4931]: W0130 06:32:01.299925 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05ee1a7_c012_4766_8d48_3b508d4f8cd2.slice/crio-d1346e33e80a02288e81934c023d42a1fb25770c363f448e46a776f8431d5f87 WatchSource:0}: Error finding container d1346e33e80a02288e81934c023d42a1fb25770c363f448e46a776f8431d5f87: Status 404 returned error can't find the container with id d1346e33e80a02288e81934c023d42a1fb25770c363f448e46a776f8431d5f87 Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.502998 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"60983391-7945-4efe-ae6d-7c6ae80e2df8","Type":"ContainerStarted","Data":"01c71b8f3a33ee73298e57d599111b20b88828a20aa1da43a4b20be7c4387d06"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.503051 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"60983391-7945-4efe-ae6d-7c6ae80e2df8","Type":"ContainerStarted","Data":"31caf1041cf37cf962e9d2b83850f5f354cc0afb4bdf81669ba1bbd50b0bbe78"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.503066 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"60983391-7945-4efe-ae6d-7c6ae80e2df8","Type":"ContainerStarted","Data":"0ef299b8cff4ed4f3c0da7b0f9cf13517fb2cb74d4316f5878117a686e945840"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.504341 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11","Type":"ContainerStarted","Data":"4f12060011670fe522a97b40f56bf9d2a90759d74f09b317785487900e74d7db"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.504392 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"78634c9d-d8d8-4eed-adc7-fe9fdbf69a11","Type":"ContainerStarted","Data":"c871d6390c1e6a86d07486d70659af4126b81190649deba63824081281713824"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.507856 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"ffc86399-3f01-4c6a-942d-b255a957dc52","Type":"ContainerStarted","Data":"26e82e9bee92974c55056f63e2d4f94a18bf881e9e5653b9428fdf0212c0ed99"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.507893 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ffc86399-3f01-4c6a-942d-b255a957dc52","Type":"ContainerStarted","Data":"a8ed37e9fdc67d5e6115c2e9955c4ee9a40a50e46f2229729d527f86cdff4778"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.509672 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"83787678-4305-4893-8aa4-d1ddd8c15343","Type":"ContainerStarted","Data":"b6167738d8312c199a5c0c4f4ee2b22c6d683af17022db00425e1ce30b4f2501"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.509704 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"83787678-4305-4893-8aa4-d1ddd8c15343","Type":"ContainerStarted","Data":"4d224c8fcd758cd92a7aef687ec318745ec7670c5feb656d3acb2ff97fc3fb87"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.511185 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a05ee1a7-c012-4766-8d48-3b508d4f8cd2","Type":"ContainerStarted","Data":"d1346e33e80a02288e81934c023d42a1fb25770c363f448e46a776f8431d5f87"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.512678 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"6e651c73-1761-4cda-83b7-5a80fa3af6f4","Type":"ContainerStarted","Data":"bb9e29a43f2b8697926e39bee430d1df985a5bf7c849bb28641863b0ea609a4a"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.512700 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"6e651c73-1761-4cda-83b7-5a80fa3af6f4","Type":"ContainerStarted","Data":"90da14e6adc2c79bf540a8391491eb7642d8f94543af62652a399ece12ea1dce"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.512710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"6e651c73-1761-4cda-83b7-5a80fa3af6f4","Type":"ContainerStarted","Data":"1496e030a874e43133b8be8391443ccb9a7aa903acecd2b41639c0b1993fe1f9"} Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.524138 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.524116073 podStartE2EDuration="3.524116073s" podCreationTimestamp="2026-01-30 06:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:01.521371444 +0000 UTC m=+5056.891281731" watchObservedRunningTime="2026-01-30 06:32:01.524116073 +0000 UTC m=+5056.894026330" Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.557342 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.557322706 podStartE2EDuration="3.557322706s" podCreationTimestamp="2026-01-30 06:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:01.545155677 +0000 UTC m=+5056.915065924" watchObservedRunningTime="2026-01-30 06:32:01.557322706 +0000 UTC m=+5056.927232973" Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.568474 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.5684580649999997 podStartE2EDuration="3.568458065s" podCreationTimestamp="2026-01-30 06:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:01.566123438 +0000 UTC m=+5056.936033695" watchObservedRunningTime="2026-01-30 06:32:01.568458065 +0000 UTC m=+5056.938368322" Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.593368 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.59335308 podStartE2EDuration="3.59335308s" podCreationTimestamp="2026-01-30 06:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:01.588557912 +0000 UTC m=+5056.958468169" watchObservedRunningTime="2026-01-30 06:32:01.59335308 +0000 UTC m=+5056.963263337" Jan 30 06:32:01 crc kubenswrapper[4931]: I0130 06:32:01.620615 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.620591522 podStartE2EDuration="3.620591522s" podCreationTimestamp="2026-01-30 06:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:01.61704831 +0000 UTC m=+5056.986958567" watchObservedRunningTime="2026-01-30 06:32:01.620591522 +0000 UTC m=+5056.990501799" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.449989 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.487415 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.523116 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a05ee1a7-c012-4766-8d48-3b508d4f8cd2","Type":"ContainerStarted","Data":"1ed8a4f374983be1cab867ee0d2de08020dd1e52f75d07cd64012203969c23c3"} Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.523187 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a05ee1a7-c012-4766-8d48-3b508d4f8cd2","Type":"ContainerStarted","Data":"1abd673149956f76172a016558a841e76630b83138e04eef0eec03140deed6ac"} Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.523948 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.561388 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.561363811 podStartE2EDuration="4.561363811s" podCreationTimestamp="2026-01-30 06:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:02.54877238 +0000 UTC m=+5057.918682717" watchObservedRunningTime="2026-01-30 06:32:02.561363811 +0000 UTC m=+5057.931274108" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.625686 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.659615 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-2" Jan 30 06:32:02 crc kubenswrapper[4931]: I0130 06:32:02.665868 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 30 06:32:04 crc kubenswrapper[4931]: I0130 06:32:04.450559 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 30 06:32:04 crc kubenswrapper[4931]: I0130 06:32:04.487926 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 30 06:32:04 crc kubenswrapper[4931]: I0130 06:32:04.525239 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 30 06:32:04 crc kubenswrapper[4931]: I0130 06:32:04.625809 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 30 06:32:04 crc kubenswrapper[4931]: I0130 06:32:04.659273 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 30 06:32:04 crc kubenswrapper[4931]: I0130 06:32:04.666090 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.501531 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.572880 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.589254 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.622805 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.627734 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.651551 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.705108 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.743036 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.748927 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.835500 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-lzfwn"] Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.838214 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.841867 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.846780 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-lzfwn"] Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.936869 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-config\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.936944 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-ovsdbserver-sb\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.937363 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-dns-svc\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:05 crc kubenswrapper[4931]: I0130 06:32:05.937466 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h659w\" (UniqueName: \"kubernetes.io/projected/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-kube-api-access-h659w\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.038532 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-dns-svc\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.038632 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h659w\" (UniqueName: \"kubernetes.io/projected/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-kube-api-access-h659w\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.038673 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-config\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.038696 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-ovsdbserver-sb\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc 
kubenswrapper[4931]: I0130 06:32:06.039583 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-config\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.039589 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-ovsdbserver-sb\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.042068 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-dns-svc\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.061212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h659w\" (UniqueName: \"kubernetes.io/projected/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-kube-api-access-h659w\") pod \"dnsmasq-dns-df6c6d7b7-lzfwn\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") " pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.170367 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.594812 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-lzfwn"] Jan 30 06:32:06 crc kubenswrapper[4931]: W0130 06:32:06.601694 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe396dfb_9a46_4bb7_9374_3ffe00f58db8.slice/crio-a84c445d4b480f9e8a3200bbdd1a8a6e97bfb9e01ef343ee8be5fd00bdf76110 WatchSource:0}: Error finding container a84c445d4b480f9e8a3200bbdd1a8a6e97bfb9e01ef343ee8be5fd00bdf76110: Status 404 returned error can't find the container with id a84c445d4b480f9e8a3200bbdd1a8a6e97bfb9e01ef343ee8be5fd00bdf76110 Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.602106 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.884184 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-lzfwn"] Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.918614 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666dc49759-6999t"] Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.929298 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.932896 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 30 06:32:06 crc kubenswrapper[4931]: I0130 06:32:06.935333 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666dc49759-6999t"]
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.061798 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-dns-svc\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.061847 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-nb\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.061907 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-config\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.061951 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-sb\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.062021 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfn7\" (UniqueName: \"kubernetes.io/projected/0e31871f-729e-4b67-98d0-96973ea90de3-kube-api-access-qcfn7\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.163190 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-config\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.163555 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-sb\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.163755 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcfn7\" (UniqueName: \"kubernetes.io/projected/0e31871f-729e-4b67-98d0-96973ea90de3-kube-api-access-qcfn7\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.163895 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-dns-svc\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.163998 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-nb\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.165071 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-nb\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.165090 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-sb\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.165097 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-dns-svc\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.165510 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-config\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.201625 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcfn7\" (UniqueName: \"kubernetes.io/projected/0e31871f-729e-4b67-98d0-96973ea90de3-kube-api-access-qcfn7\") pod \"dnsmasq-dns-666dc49759-6999t\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.258223 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.566536 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerID="0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1" exitCode=0
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.566579 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" event={"ID":"fe396dfb-9a46-4bb7-9374-3ffe00f58db8","Type":"ContainerDied","Data":"0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1"}
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.566922 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" event={"ID":"fe396dfb-9a46-4bb7-9374-3ffe00f58db8","Type":"ContainerStarted","Data":"a84c445d4b480f9e8a3200bbdd1a8a6e97bfb9e01ef343ee8be5fd00bdf76110"}
Jan 30 06:32:07 crc kubenswrapper[4931]: W0130 06:32:07.703849 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e31871f_729e_4b67_98d0_96973ea90de3.slice/crio-040800f5a3860baea453ed500ca3333fe258941fbbb6b519709bd70c1f55e9fe WatchSource:0}: Error finding container 040800f5a3860baea453ed500ca3333fe258941fbbb6b519709bd70c1f55e9fe: Status 404 returned error can't find the container with id 040800f5a3860baea453ed500ca3333fe258941fbbb6b519709bd70c1f55e9fe
Jan 30 06:32:07 crc kubenswrapper[4931]: I0130 06:32:07.704582 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666dc49759-6999t"]
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.577990 4931 generic.go:334] "Generic (PLEG): container finished" podID="0e31871f-729e-4b67-98d0-96973ea90de3" containerID="89484ad9976e7f0f1e67abb8cfa05b476c12211570ced1863487804ae3932924" exitCode=0
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.578321 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666dc49759-6999t" event={"ID":"0e31871f-729e-4b67-98d0-96973ea90de3","Type":"ContainerDied","Data":"89484ad9976e7f0f1e67abb8cfa05b476c12211570ced1863487804ae3932924"}
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.578586 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666dc49759-6999t" event={"ID":"0e31871f-729e-4b67-98d0-96973ea90de3","Type":"ContainerStarted","Data":"040800f5a3860baea453ed500ca3333fe258941fbbb6b519709bd70c1f55e9fe"}
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.583965 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" event={"ID":"fe396dfb-9a46-4bb7-9374-3ffe00f58db8","Type":"ContainerStarted","Data":"ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b"}
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.584822 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerName="dnsmasq-dns" containerID="cri-o://ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b" gracePeriod=10
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.585082 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn"
Jan 30 06:32:08 crc kubenswrapper[4931]: I0130 06:32:08.636449 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" podStartSLOduration=3.6363999700000003 podStartE2EDuration="3.63639997s" podCreationTimestamp="2026-01-30 06:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:08.629489442 +0000 UTC m=+5063.999399709" watchObservedRunningTime="2026-01-30 06:32:08.63639997 +0000 UTC m=+5064.006310237"
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.013859 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn"
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.097213 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h659w\" (UniqueName: \"kubernetes.io/projected/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-kube-api-access-h659w\") pod \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") "
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.097310 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-dns-svc\") pod \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") "
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.097378 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-ovsdbserver-sb\") pod \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") "
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.097401 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-config\") pod \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\" (UID: \"fe396dfb-9a46-4bb7-9374-3ffe00f58db8\") "
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.111782 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-kube-api-access-h659w" (OuterVolumeSpecName: "kube-api-access-h659w") pod "fe396dfb-9a46-4bb7-9374-3ffe00f58db8" (UID: "fe396dfb-9a46-4bb7-9374-3ffe00f58db8"). InnerVolumeSpecName "kube-api-access-h659w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.136648 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-config" (OuterVolumeSpecName: "config") pod "fe396dfb-9a46-4bb7-9374-3ffe00f58db8" (UID: "fe396dfb-9a46-4bb7-9374-3ffe00f58db8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.138918 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe396dfb-9a46-4bb7-9374-3ffe00f58db8" (UID: "fe396dfb-9a46-4bb7-9374-3ffe00f58db8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.155397 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe396dfb-9a46-4bb7-9374-3ffe00f58db8" (UID: "fe396dfb-9a46-4bb7-9374-3ffe00f58db8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.200469 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h659w\" (UniqueName: \"kubernetes.io/projected/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-kube-api-access-h659w\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.200504 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.200620 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.200629 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe396dfb-9a46-4bb7-9374-3ffe00f58db8-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.595744 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666dc49759-6999t" event={"ID":"0e31871f-729e-4b67-98d0-96973ea90de3","Type":"ContainerStarted","Data":"287747b607e19df8febc584091bc73c9472de152802c4b7543aad2f9bb9038dd"} Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.596539 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666dc49759-6999t" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.601216 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerID="ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b" exitCode=0 Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.601484 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" event={"ID":"fe396dfb-9a46-4bb7-9374-3ffe00f58db8","Type":"ContainerDied","Data":"ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b"} Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.601567 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" event={"ID":"fe396dfb-9a46-4bb7-9374-3ffe00f58db8","Type":"ContainerDied","Data":"a84c445d4b480f9e8a3200bbdd1a8a6e97bfb9e01ef343ee8be5fd00bdf76110"} Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.601593 4931 scope.go:117] "RemoveContainer" containerID="ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.601816 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df6c6d7b7-lzfwn" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.622222 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666dc49759-6999t" podStartSLOduration=3.622204292 podStartE2EDuration="3.622204292s" podCreationTimestamp="2026-01-30 06:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:09.616729215 +0000 UTC m=+5064.986639482" watchObservedRunningTime="2026-01-30 06:32:09.622204292 +0000 UTC m=+5064.992114559" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.627490 4931 scope.go:117] "RemoveContainer" containerID="0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.641286 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-lzfwn"] Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.659309 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df6c6d7b7-lzfwn"] Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.664051 4931 scope.go:117] "RemoveContainer" containerID="ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b" Jan 30 06:32:09 crc kubenswrapper[4931]: E0130 06:32:09.664394 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b\": container with ID starting with ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b not found: ID does not exist" containerID="ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.664434 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b"} err="failed to get container status \"ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b\": rpc error: code = NotFound desc = could not find container \"ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b\": container with ID starting with ae734a6ebd6227f8fcf285169a0430e26869fe5fdc3662f764a1920e525cd16b not found: ID does not exist" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.664453 4931 scope.go:117] "RemoveContainer" containerID="0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1" Jan 30 06:32:09 crc kubenswrapper[4931]: E0130 06:32:09.664718 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1\": container with ID starting with 0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1 not found: ID does not exist" containerID="0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.664774 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1"} err="failed to get container status \"0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1\": rpc error: code = NotFound desc = could not find container \"0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1\": container with ID starting with 
0d7e74b9ea8646e46814295903083348e0de1587c8a621c11772b41b5f2780d1 not found: ID does not exist" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.698697 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 30 06:32:09 crc kubenswrapper[4931]: I0130 06:32:09.703339 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 30 06:32:11 crc kubenswrapper[4931]: I0130 06:32:11.439628 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" path="/var/lib/kubelet/pods/fe396dfb-9a46-4bb7-9374-3ffe00f58db8/volumes" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.757791 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 30 06:32:12 crc kubenswrapper[4931]: E0130 06:32:12.758300 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerName="dnsmasq-dns" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.758324 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerName="dnsmasq-dns" Jan 30 06:32:12 crc kubenswrapper[4931]: E0130 06:32:12.758362 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerName="init" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.758378 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerName="init" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.758662 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe396dfb-9a46-4bb7-9374-3ffe00f58db8" containerName="dnsmasq-dns" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.759680 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.762860 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.774413 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.865968 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/49126964-dfd0-4103-a3fd-5244d9b49b9d-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.866089 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c442539e-eb47-4d14-9776-8bd69ac65863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c442539e-eb47-4d14-9776-8bd69ac65863\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.866190 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6f4\" (UniqueName: \"kubernetes.io/projected/49126964-dfd0-4103-a3fd-5244d9b49b9d-kube-api-access-cw6f4\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.969758 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/49126964-dfd0-4103-a3fd-5244d9b49b9d-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.969977 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c442539e-eb47-4d14-9776-8bd69ac65863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c442539e-eb47-4d14-9776-8bd69ac65863\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.970145 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6f4\" (UniqueName: \"kubernetes.io/projected/49126964-dfd0-4103-a3fd-5244d9b49b9d-kube-api-access-cw6f4\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data" Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.975704 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.976081 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c442539e-eb47-4d14-9776-8bd69ac65863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c442539e-eb47-4d14-9776-8bd69ac65863\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66f725498011e0f8f1b50f15f3253b4003921dec61a01597fc0176e58e7ec4fd/globalmount\"" pod="openstack/ovn-copy-data"
Jan 30 06:32:12 crc kubenswrapper[4931]: I0130 06:32:12.980650 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/49126964-dfd0-4103-a3fd-5244d9b49b9d-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data"
Jan 30 06:32:13 crc kubenswrapper[4931]: I0130 06:32:13.422592 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b"
Jan 30 06:32:13 crc kubenswrapper[4931]: E0130 06:32:13.423070 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:32:13 crc kubenswrapper[4931]: I0130 06:32:13.505232 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6f4\" (UniqueName: \"kubernetes.io/projected/49126964-dfd0-4103-a3fd-5244d9b49b9d-kube-api-access-cw6f4\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data"
Jan 30 06:32:13 crc kubenswrapper[4931]: I0130 06:32:13.646687 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c442539e-eb47-4d14-9776-8bd69ac65863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c442539e-eb47-4d14-9776-8bd69ac65863\") pod \"ovn-copy-data\" (UID: \"49126964-dfd0-4103-a3fd-5244d9b49b9d\") " pod="openstack/ovn-copy-data"
Jan 30 06:32:13 crc kubenswrapper[4931]: I0130 06:32:13.702218 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 30 06:32:14 crc kubenswrapper[4931]: I0130 06:32:14.119910 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Jan 30 06:32:14 crc kubenswrapper[4931]: W0130 06:32:14.124529 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49126964_dfd0_4103_a3fd_5244d9b49b9d.slice/crio-6bd556cbb3b2ed9e4e6f66be6766dfb9de16e482a68ca2ec035ae43e8913d115 WatchSource:0}: Error finding container 6bd556cbb3b2ed9e4e6f66be6766dfb9de16e482a68ca2ec035ae43e8913d115: Status 404 returned error can't find the container with id 6bd556cbb3b2ed9e4e6f66be6766dfb9de16e482a68ca2ec035ae43e8913d115
Jan 30 06:32:14 crc kubenswrapper[4931]: I0130 06:32:14.657895 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"49126964-dfd0-4103-a3fd-5244d9b49b9d","Type":"ContainerStarted","Data":"6bd556cbb3b2ed9e4e6f66be6766dfb9de16e482a68ca2ec035ae43e8913d115"}
Jan 30 06:32:15 crc kubenswrapper[4931]: I0130 06:32:15.670078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"49126964-dfd0-4103-a3fd-5244d9b49b9d","Type":"ContainerStarted","Data":"0617797c0cfbc4de0b816608cfa5f5f2f1749246b5d34b737bd828707ea93886"}
Jan 30 06:32:15 crc kubenswrapper[4931]: I0130 06:32:15.698969 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=4.202444561 podStartE2EDuration="4.69894336s" podCreationTimestamp="2026-01-30 06:32:11 +0000 UTC" firstStartedPulling="2026-01-30 06:32:14.126671598 +0000 UTC m=+5069.496581885" lastFinishedPulling="2026-01-30 06:32:14.623170387 +0000 UTC m=+5069.993080684" observedRunningTime="2026-01-30 06:32:15.690732395 +0000 UTC m=+5071.060642692" watchObservedRunningTime="2026-01-30 06:32:15.69894336 +0000 UTC m=+5071.068853647"
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.260691 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666dc49759-6999t"
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.339007 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8dc7b"]
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.339636 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="dnsmasq-dns" containerID="cri-o://c38a0669baf43cbcb3f248e3fbd7e19a5da64c872efcf71b2a52e9bac3e9cedd" gracePeriod=10
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.696938 4931 generic.go:334] "Generic (PLEG): container finished" podID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerID="c38a0669baf43cbcb3f248e3fbd7e19a5da64c872efcf71b2a52e9bac3e9cedd" exitCode=0
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.696974 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" event={"ID":"df16978b-d22c-4dd1-87d8-330cf82a859d","Type":"ContainerDied","Data":"c38a0669baf43cbcb3f248e3fbd7e19a5da64c872efcf71b2a52e9bac3e9cedd"}
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.797176 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.970936 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-config\") pod \"df16978b-d22c-4dd1-87d8-330cf82a859d\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") "
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.971042 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r68qr\" (UniqueName: \"kubernetes.io/projected/df16978b-d22c-4dd1-87d8-330cf82a859d-kube-api-access-r68qr\") pod \"df16978b-d22c-4dd1-87d8-330cf82a859d\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") "
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.971255 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-dns-svc\") pod \"df16978b-d22c-4dd1-87d8-330cf82a859d\" (UID: \"df16978b-d22c-4dd1-87d8-330cf82a859d\") "
Jan 30 06:32:17 crc kubenswrapper[4931]: I0130 06:32:17.996471 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df16978b-d22c-4dd1-87d8-330cf82a859d-kube-api-access-r68qr" (OuterVolumeSpecName: "kube-api-access-r68qr") pod "df16978b-d22c-4dd1-87d8-330cf82a859d" (UID: "df16978b-d22c-4dd1-87d8-330cf82a859d"). InnerVolumeSpecName "kube-api-access-r68qr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.033702 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df16978b-d22c-4dd1-87d8-330cf82a859d" (UID: "df16978b-d22c-4dd1-87d8-330cf82a859d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.043405 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-config" (OuterVolumeSpecName: "config") pod "df16978b-d22c-4dd1-87d8-330cf82a859d" (UID: "df16978b-d22c-4dd1-87d8-330cf82a859d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.073161 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-config\") on node \"crc\" DevicePath \"\""
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.073199 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r68qr\" (UniqueName: \"kubernetes.io/projected/df16978b-d22c-4dd1-87d8-330cf82a859d-kube-api-access-r68qr\") on node \"crc\" DevicePath \"\""
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.073213 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df16978b-d22c-4dd1-87d8-330cf82a859d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.712974 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-8dc7b" event={"ID":"df16978b-d22c-4dd1-87d8-330cf82a859d","Type":"ContainerDied","Data":"e39274ddf419bf92df1598af572cac14fef4ddf978c753b8f221ebec54b897c7"}
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.713035 4931 scope.go:117] "RemoveContainer" containerID="c38a0669baf43cbcb3f248e3fbd7e19a5da64c872efcf71b2a52e9bac3e9cedd"
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.713045 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-8dc7b"
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.740195 4931 scope.go:117] "RemoveContainer" containerID="a56ecc0c98ffc762965d58506b3e81c6c6637f6a00e16f27ab2be355f3d037e0"
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.788887 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8dc7b"]
Jan 30 06:32:18 crc kubenswrapper[4931]: I0130 06:32:18.801364 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-8dc7b"]
Jan 30 06:32:19 crc kubenswrapper[4931]: I0130 06:32:19.437589 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" path="/var/lib/kubelet/pods/df16978b-d22c-4dd1-87d8-330cf82a859d/volumes"
Jan 30 06:32:20 crc kubenswrapper[4931]: E0130 06:32:20.494961 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:59698->38.102.83.179:45103: write tcp 38.102.83.179:59698->38.102.83.179:45103: write: broken pipe
Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.324600 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 30 06:32:21 crc kubenswrapper[4931]: E0130 06:32:21.325570 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="dnsmasq-dns"
Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.325664 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="dnsmasq-dns"
Jan 30 06:32:21 crc kubenswrapper[4931]: E0130 06:32:21.325742 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="init"
Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.325809 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="init"
Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.326091 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="dnsmasq-dns"
removing state" podUID="df16978b-d22c-4dd1-87d8-330cf82a859d" containerName="dnsmasq-dns" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.327172 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.334792 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.335125 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.335125 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5bcn4" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.336536 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.433176 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wp5w\" (UniqueName: \"kubernetes.io/projected/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-kube-api-access-9wp5w\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.433236 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-config\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.433280 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.433316 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-scripts\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.433380 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.535246 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.535371 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wp5w\" (UniqueName: \"kubernetes.io/projected/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-kube-api-access-9wp5w\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: 
I0130 06:32:21.535402 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-config\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.535438 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.535457 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-scripts\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.536166 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.537063 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-config\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.537733 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-scripts\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.545212 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.551601 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wp5w\" (UniqueName: \"kubernetes.io/projected/c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe-kube-api-access-9wp5w\") pod \"ovn-northd-0\" (UID: \"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe\") " pod="openstack/ovn-northd-0" Jan 30 06:32:21 crc kubenswrapper[4931]: I0130 06:32:21.654221 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 06:32:22 crc kubenswrapper[4931]: I0130 06:32:22.190163 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 06:32:22 crc kubenswrapper[4931]: I0130 06:32:22.757507 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe","Type":"ContainerStarted","Data":"056c368bc681352e80840b63f8e913f984184e6ed487de39d00e2b5f9a6c4fe1"} Jan 30 06:32:22 crc kubenswrapper[4931]: I0130 06:32:22.758074 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe","Type":"ContainerStarted","Data":"1656972ea219ef061adb87d6947c3351e89efdbce4b92eea37cb77535ec15950"} Jan 30 06:32:22 crc kubenswrapper[4931]: I0130 06:32:22.758098 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe","Type":"ContainerStarted","Data":"7d52b8dd2614ca3683972322875ce159bcc9526f76eb6a9e1f344f059d916f3a"} Jan 30 06:32:22 crc kubenswrapper[4931]: I0130 06:32:22.758146 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 30 06:32:22 crc kubenswrapper[4931]: I0130 06:32:22.794495 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.794467698 podStartE2EDuration="1.794467698s" podCreationTimestamp="2026-01-30 06:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:22.782196756 +0000 UTC m=+5078.152107053" watchObservedRunningTime="2026-01-30 06:32:22.794467698 +0000 UTC m=+5078.164377985" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.765245 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4gjjc"] Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.767386 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4gjjc" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.781373 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9c4e-account-create-update-vz7cn"] Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.783144 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.785674 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.790275 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4gjjc"] Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.796907 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9c4e-account-create-update-vz7cn"] Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.948329 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95012d1-b402-48eb-baf4-36fabfd1e4f2-operator-scripts\") pod \"keystone-9c4e-account-create-update-vz7cn\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.948391 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5rh\" (UniqueName: \"kubernetes.io/projected/d5b58826-6a83-4c91-a9f7-8c6c861c509b-kube-api-access-zn5rh\") pod \"keystone-db-create-4gjjc\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") " pod="openstack/keystone-db-create-4gjjc" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.948516 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvc8r\" (UniqueName: \"kubernetes.io/projected/d95012d1-b402-48eb-baf4-36fabfd1e4f2-kube-api-access-xvc8r\") pod \"keystone-9c4e-account-create-update-vz7cn\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:26 crc kubenswrapper[4931]: I0130 06:32:26.948599 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b58826-6a83-4c91-a9f7-8c6c861c509b-operator-scripts\") pod \"keystone-db-create-4gjjc\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") " pod="openstack/keystone-db-create-4gjjc" Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.050285 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvc8r\" (UniqueName: \"kubernetes.io/projected/d95012d1-b402-48eb-baf4-36fabfd1e4f2-kube-api-access-xvc8r\") pod \"keystone-9c4e-account-create-update-vz7cn\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.050372 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b58826-6a83-4c91-a9f7-8c6c861c509b-operator-scripts\") pod \"keystone-db-create-4gjjc\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") " pod="openstack/keystone-db-create-4gjjc" Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.050413 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95012d1-b402-48eb-baf4-36fabfd1e4f2-operator-scripts\") pod \"keystone-9c4e-account-create-update-vz7cn\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:27 crc 
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.051140 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b58826-6a83-4c91-a9f7-8c6c861c509b-operator-scripts\") pod \"keystone-db-create-4gjjc\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") " pod="openstack/keystone-db-create-4gjjc"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.051331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95012d1-b402-48eb-baf4-36fabfd1e4f2-operator-scripts\") pod \"keystone-9c4e-account-create-update-vz7cn\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " pod="openstack/keystone-9c4e-account-create-update-vz7cn"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.082324 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvc8r\" (UniqueName: \"kubernetes.io/projected/d95012d1-b402-48eb-baf4-36fabfd1e4f2-kube-api-access-xvc8r\") pod \"keystone-9c4e-account-create-update-vz7cn\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " pod="openstack/keystone-9c4e-account-create-update-vz7cn"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.083866 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn5rh\" (UniqueName: \"kubernetes.io/projected/d5b58826-6a83-4c91-a9f7-8c6c861c509b-kube-api-access-zn5rh\") pod \"keystone-db-create-4gjjc\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") " pod="openstack/keystone-db-create-4gjjc"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.116993 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4gjjc"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.125114 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c4e-account-create-update-vz7cn"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.426820 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b"
Jan 30 06:32:27 crc kubenswrapper[4931]: E0130 06:32:27.427220 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85"
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.576181 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9c4e-account-create-update-vz7cn"]
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.777935 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4gjjc"]
Jan 30 06:32:27 crc kubenswrapper[4931]: W0130 06:32:27.801481 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5b58826_6a83_4c91_a9f7_8c6c861c509b.slice/crio-92b95bd2e19b7758033797d9e3976f9ced241ca140f8908b86261492c6bc081f WatchSource:0}: Error finding container 92b95bd2e19b7758033797d9e3976f9ced241ca140f8908b86261492c6bc081f: Status 404 returned error can't find the container with id 92b95bd2e19b7758033797d9e3976f9ced241ca140f8908b86261492c6bc081f
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.818191 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4gjjc" event={"ID":"d5b58826-6a83-4c91-a9f7-8c6c861c509b","Type":"ContainerStarted","Data":"92b95bd2e19b7758033797d9e3976f9ced241ca140f8908b86261492c6bc081f"}
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.819361 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c4e-account-create-update-vz7cn" event={"ID":"d95012d1-b402-48eb-baf4-36fabfd1e4f2","Type":"ContainerStarted","Data":"cbeb8eeb50114e9f7323f10c636e65f223a4a4ce3cb8536dd62a68b32d0fcd46"}
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.819404 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c4e-account-create-update-vz7cn" event={"ID":"d95012d1-b402-48eb-baf4-36fabfd1e4f2","Type":"ContainerStarted","Data":"be2c32b6f745597d8479e5ac5b6c48a807f5085a073d891cb3a28dbc0b82adf3"}
Jan 30 06:32:27 crc kubenswrapper[4931]: I0130 06:32:27.853195 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-9c4e-account-create-update-vz7cn" podStartSLOduration=1.853143578 podStartE2EDuration="1.853143578s" podCreationTimestamp="2026-01-30 06:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:27.840114764 +0000 UTC m=+5083.210025061" watchObservedRunningTime="2026-01-30 06:32:27.853143578 +0000 UTC m=+5083.223053865"
Jan 30 06:32:28 crc kubenswrapper[4931]: I0130 06:32:28.832416 4931 generic.go:334] "Generic (PLEG): container finished" podID="d5b58826-6a83-4c91-a9f7-8c6c861c509b" containerID="a4061e639e286e3a321d0a950315a3048946e43d437d1b9673f6d152b515bf12" exitCode=0
Jan 30 06:32:28 crc kubenswrapper[4931]: I0130 06:32:28.832591 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4gjjc" event={"ID":"d5b58826-6a83-4c91-a9f7-8c6c861c509b","Type":"ContainerDied","Data":"a4061e639e286e3a321d0a950315a3048946e43d437d1b9673f6d152b515bf12"}
Jan 30 06:32:29 crc kubenswrapper[4931]: I0130 06:32:29.843970 4931 generic.go:334] "Generic (PLEG): container finished" podID="d95012d1-b402-48eb-baf4-36fabfd1e4f2" containerID="cbeb8eeb50114e9f7323f10c636e65f223a4a4ce3cb8536dd62a68b32d0fcd46" exitCode=0
Jan 30 06:32:29 crc kubenswrapper[4931]: I0130 06:32:29.844006 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c4e-account-create-update-vz7cn" event={"ID":"d95012d1-b402-48eb-baf4-36fabfd1e4f2","Type":"ContainerDied","Data":"cbeb8eeb50114e9f7323f10c636e65f223a4a4ce3cb8536dd62a68b32d0fcd46"}
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.207571 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4gjjc"
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.313694 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn5rh\" (UniqueName: \"kubernetes.io/projected/d5b58826-6a83-4c91-a9f7-8c6c861c509b-kube-api-access-zn5rh\") pod \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") "
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.313982 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b58826-6a83-4c91-a9f7-8c6c861c509b-operator-scripts\") pod \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\" (UID: \"d5b58826-6a83-4c91-a9f7-8c6c861c509b\") "
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.314756 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b58826-6a83-4c91-a9f7-8c6c861c509b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5b58826-6a83-4c91-a9f7-8c6c861c509b" (UID: "d5b58826-6a83-4c91-a9f7-8c6c861c509b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.320623 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b58826-6a83-4c91-a9f7-8c6c861c509b-kube-api-access-zn5rh" (OuterVolumeSpecName: "kube-api-access-zn5rh") pod "d5b58826-6a83-4c91-a9f7-8c6c861c509b" (UID: "d5b58826-6a83-4c91-a9f7-8c6c861c509b"). InnerVolumeSpecName "kube-api-access-zn5rh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.415495 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b58826-6a83-4c91-a9f7-8c6c861c509b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.415936 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn5rh\" (UniqueName: \"kubernetes.io/projected/d5b58826-6a83-4c91-a9f7-8c6c861c509b-kube-api-access-zn5rh\") on node \"crc\" DevicePath \"\""
Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.854973 4931 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-create-4gjjc" Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.855079 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4gjjc" event={"ID":"d5b58826-6a83-4c91-a9f7-8c6c861c509b","Type":"ContainerDied","Data":"92b95bd2e19b7758033797d9e3976f9ced241ca140f8908b86261492c6bc081f"} Jan 30 06:32:30 crc kubenswrapper[4931]: I0130 06:32:30.855114 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92b95bd2e19b7758033797d9e3976f9ced241ca140f8908b86261492c6bc081f" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.274157 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.431593 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95012d1-b402-48eb-baf4-36fabfd1e4f2-operator-scripts\") pod \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.431711 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvc8r\" (UniqueName: \"kubernetes.io/projected/d95012d1-b402-48eb-baf4-36fabfd1e4f2-kube-api-access-xvc8r\") pod \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\" (UID: \"d95012d1-b402-48eb-baf4-36fabfd1e4f2\") " Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.432842 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95012d1-b402-48eb-baf4-36fabfd1e4f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d95012d1-b402-48eb-baf4-36fabfd1e4f2" (UID: "d95012d1-b402-48eb-baf4-36fabfd1e4f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.438450 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95012d1-b402-48eb-baf4-36fabfd1e4f2-kube-api-access-xvc8r" (OuterVolumeSpecName: "kube-api-access-xvc8r") pod "d95012d1-b402-48eb-baf4-36fabfd1e4f2" (UID: "d95012d1-b402-48eb-baf4-36fabfd1e4f2"). InnerVolumeSpecName "kube-api-access-xvc8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.535183 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d95012d1-b402-48eb-baf4-36fabfd1e4f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.535222 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvc8r\" (UniqueName: \"kubernetes.io/projected/d95012d1-b402-48eb-baf4-36fabfd1e4f2-kube-api-access-xvc8r\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.872464 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9c4e-account-create-update-vz7cn" event={"ID":"d95012d1-b402-48eb-baf4-36fabfd1e4f2","Type":"ContainerDied","Data":"be2c32b6f745597d8479e5ac5b6c48a807f5085a073d891cb3a28dbc0b82adf3"} Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.872508 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be2c32b6f745597d8479e5ac5b6c48a807f5085a073d891cb3a28dbc0b82adf3" Jan 30 06:32:31 crc kubenswrapper[4931]: I0130 06:32:31.872627 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9c4e-account-create-update-vz7cn" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.566523 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ckz5s"] Jan 30 06:32:32 crc kubenswrapper[4931]: E0130 06:32:32.567338 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b58826-6a83-4c91-a9f7-8c6c861c509b" containerName="mariadb-database-create" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.567368 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b58826-6a83-4c91-a9f7-8c6c861c509b" containerName="mariadb-database-create" Jan 30 06:32:32 crc kubenswrapper[4931]: E0130 06:32:32.567417 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95012d1-b402-48eb-baf4-36fabfd1e4f2" containerName="mariadb-account-create-update" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.567452 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95012d1-b402-48eb-baf4-36fabfd1e4f2" containerName="mariadb-account-create-update" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.567700 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b58826-6a83-4c91-a9f7-8c6c861c509b" containerName="mariadb-database-create" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.567733 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95012d1-b402-48eb-baf4-36fabfd1e4f2" containerName="mariadb-account-create-update" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.568639 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.574346 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.574624 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.574831 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.575210 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg2q7" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.575962 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ckz5s"] Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.656192 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pdl\" (UniqueName: \"kubernetes.io/projected/65588685-2245-486d-b7a9-95b8a71f8ff7-kube-api-access-t9pdl\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.656292 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-config-data\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.656351 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-combined-ca-bundle\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.758059 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-combined-ca-bundle\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.758140 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pdl\" (UniqueName: \"kubernetes.io/projected/65588685-2245-486d-b7a9-95b8a71f8ff7-kube-api-access-t9pdl\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.758239 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-config-data\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.762612 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-combined-ca-bundle\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " 
pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.762725 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-config-data\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.783111 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pdl\" (UniqueName: \"kubernetes.io/projected/65588685-2245-486d-b7a9-95b8a71f8ff7-kube-api-access-t9pdl\") pod \"keystone-db-sync-ckz5s\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:32 crc kubenswrapper[4931]: I0130 06:32:32.890996 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:33 crc kubenswrapper[4931]: I0130 06:32:33.419211 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ckz5s"] Jan 30 06:32:33 crc kubenswrapper[4931]: W0130 06:32:33.429409 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65588685_2245_486d_b7a9_95b8a71f8ff7.slice/crio-455c2a7f6e8e121d57f56247f3e15caf25d977cfac9d6d567b2d39e578602343 WatchSource:0}: Error finding container 455c2a7f6e8e121d57f56247f3e15caf25d977cfac9d6d567b2d39e578602343: Status 404 returned error can't find the container with id 455c2a7f6e8e121d57f56247f3e15caf25d977cfac9d6d567b2d39e578602343 Jan 30 06:32:33 crc kubenswrapper[4931]: I0130 06:32:33.890180 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ckz5s" event={"ID":"65588685-2245-486d-b7a9-95b8a71f8ff7","Type":"ContainerStarted","Data":"cdd69294edc054ababcd2665bc64a680aa08a0804663ff331148d5c4aedf9140"} Jan 30 06:32:33 crc kubenswrapper[4931]: I0130 06:32:33.890741 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ckz5s" event={"ID":"65588685-2245-486d-b7a9-95b8a71f8ff7","Type":"ContainerStarted","Data":"455c2a7f6e8e121d57f56247f3e15caf25d977cfac9d6d567b2d39e578602343"} Jan 30 06:32:33 crc kubenswrapper[4931]: I0130 06:32:33.922744 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ckz5s" podStartSLOduration=1.922710881 podStartE2EDuration="1.922710881s" podCreationTimestamp="2026-01-30 06:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:33.916127242 +0000 UTC m=+5089.286037519" watchObservedRunningTime="2026-01-30 06:32:33.922710881 +0000 UTC m=+5089.292621178" Jan 30 06:32:35 crc kubenswrapper[4931]: I0130 06:32:35.911536 4931 generic.go:334] "Generic (PLEG): container finished" podID="65588685-2245-486d-b7a9-95b8a71f8ff7" containerID="cdd69294edc054ababcd2665bc64a680aa08a0804663ff331148d5c4aedf9140" exitCode=0 Jan 30 06:32:35 crc kubenswrapper[4931]: I0130 06:32:35.911611 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ckz5s" event={"ID":"65588685-2245-486d-b7a9-95b8a71f8ff7","Type":"ContainerDied","Data":"cdd69294edc054ababcd2665bc64a680aa08a0804663ff331148d5c4aedf9140"} Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.316943 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.445032 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pdl\" (UniqueName: \"kubernetes.io/projected/65588685-2245-486d-b7a9-95b8a71f8ff7-kube-api-access-t9pdl\") pod \"65588685-2245-486d-b7a9-95b8a71f8ff7\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.445204 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-combined-ca-bundle\") pod \"65588685-2245-486d-b7a9-95b8a71f8ff7\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.445293 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-config-data\") pod \"65588685-2245-486d-b7a9-95b8a71f8ff7\" (UID: \"65588685-2245-486d-b7a9-95b8a71f8ff7\") " Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.452219 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65588685-2245-486d-b7a9-95b8a71f8ff7-kube-api-access-t9pdl" (OuterVolumeSpecName: "kube-api-access-t9pdl") pod "65588685-2245-486d-b7a9-95b8a71f8ff7" (UID: "65588685-2245-486d-b7a9-95b8a71f8ff7"). InnerVolumeSpecName "kube-api-access-t9pdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.466781 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65588685-2245-486d-b7a9-95b8a71f8ff7" (UID: "65588685-2245-486d-b7a9-95b8a71f8ff7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.497370 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-config-data" (OuterVolumeSpecName: "config-data") pod "65588685-2245-486d-b7a9-95b8a71f8ff7" (UID: "65588685-2245-486d-b7a9-95b8a71f8ff7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.547606 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.547653 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65588685-2245-486d-b7a9-95b8a71f8ff7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.547663 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pdl\" (UniqueName: \"kubernetes.io/projected/65588685-2245-486d-b7a9-95b8a71f8ff7-kube-api-access-t9pdl\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.937273 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ckz5s" event={"ID":"65588685-2245-486d-b7a9-95b8a71f8ff7","Type":"ContainerDied","Data":"455c2a7f6e8e121d57f56247f3e15caf25d977cfac9d6d567b2d39e578602343"} Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.937654 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455c2a7f6e8e121d57f56247f3e15caf25d977cfac9d6d567b2d39e578602343" Jan 30 06:32:37 crc kubenswrapper[4931]: I0130 06:32:37.937364 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ckz5s" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.184673 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cc57676c-79k7x"] Jan 30 06:32:38 crc kubenswrapper[4931]: E0130 06:32:38.185030 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65588685-2245-486d-b7a9-95b8a71f8ff7" containerName="keystone-db-sync" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.185044 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="65588685-2245-486d-b7a9-95b8a71f8ff7" containerName="keystone-db-sync" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.185196 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="65588685-2245-486d-b7a9-95b8a71f8ff7" containerName="keystone-db-sync" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.186091 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.198757 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cc57676c-79k7x"] Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.264803 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jmsq5"] Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.268205 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.272244 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.272476 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg2q7" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.272942 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.276342 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.279327 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.284828 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-sb\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.284991 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-nb\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.285117 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-dns-svc\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.285155 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-config\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.285182 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7x6\" (UniqueName: \"kubernetes.io/projected/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-kube-api-access-dd7x6\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.302413 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jmsq5"] Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386528 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-dns-svc\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386581 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-config\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386606 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7x6\" (UniqueName: \"kubernetes.io/projected/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-kube-api-access-dd7x6\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386644 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-credential-keys\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386682 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-config-data\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386700 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-sb\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386728 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzw69\" (UniqueName: \"kubernetes.io/projected/3ab4d491-82ea-4973-a8e9-ef26ba522b43-kube-api-access-zzw69\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386767 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-combined-ca-bundle\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386794 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-scripts\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386809 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-nb\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.386830 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-fernet-keys\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.387577 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-config\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.387574 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-dns-svc\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.388245 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-nb\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.388259 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-sb\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.405566 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7x6\" (UniqueName: \"kubernetes.io/projected/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-kube-api-access-dd7x6\") pod \"dnsmasq-dns-cc57676c-79k7x\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.422838 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:32:38 crc kubenswrapper[4931]: E0130 06:32:38.423112 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.488689 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-combined-ca-bundle\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.488752 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-scripts\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc 
kubenswrapper[4931]: I0130 06:32:38.488776 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-fernet-keys\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.489230 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-credential-keys\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.489293 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-config-data\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.489310 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzw69\" (UniqueName: \"kubernetes.io/projected/3ab4d491-82ea-4973-a8e9-ef26ba522b43-kube-api-access-zzw69\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.492172 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-scripts\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.492509 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-combined-ca-bundle\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.492799 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-config-data\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.493734 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-fernet-keys\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.495877 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-credential-keys\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.511389 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzw69\" (UniqueName: 
\"kubernetes.io/projected/3ab4d491-82ea-4973-a8e9-ef26ba522b43-kube-api-access-zzw69\") pod \"keystone-bootstrap-jmsq5\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.550325 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.603055 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.889899 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jmsq5"] Jan 30 06:32:38 crc kubenswrapper[4931]: W0130 06:32:38.898531 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab4d491_82ea_4973_a8e9_ef26ba522b43.slice/crio-384326f31c2144527b26181eb4cdf0836f375339b22144caee3d08cacb27cb53 WatchSource:0}: Error finding container 384326f31c2144527b26181eb4cdf0836f375339b22144caee3d08cacb27cb53: Status 404 returned error can't find the container with id 384326f31c2144527b26181eb4cdf0836f375339b22144caee3d08cacb27cb53 Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.947270 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jmsq5" event={"ID":"3ab4d491-82ea-4973-a8e9-ef26ba522b43","Type":"ContainerStarted","Data":"384326f31c2144527b26181eb4cdf0836f375339b22144caee3d08cacb27cb53"} Jan 30 06:32:38 crc kubenswrapper[4931]: I0130 06:32:38.982537 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cc57676c-79k7x"] Jan 30 06:32:38 crc kubenswrapper[4931]: W0130 06:32:38.985650 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5868fc_e1b4_4f28_ba8d_6b0d9fad2db1.slice/crio-3961340eb18072ebf266ecd7fc39a9becdb1fb9de589cabfaaf5229e8191f71b WatchSource:0}: Error finding container 3961340eb18072ebf266ecd7fc39a9becdb1fb9de589cabfaaf5229e8191f71b: Status 404 returned error can't find the container with id 3961340eb18072ebf266ecd7fc39a9becdb1fb9de589cabfaaf5229e8191f71b Jan 30 06:32:39 crc kubenswrapper[4931]: I0130 06:32:39.970297 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jmsq5" event={"ID":"3ab4d491-82ea-4973-a8e9-ef26ba522b43","Type":"ContainerStarted","Data":"eb43994040c586b7da805891db0738dbfdbd3eca5875691d38856aeb8f3f02e2"} Jan 30 06:32:39 crc kubenswrapper[4931]: I0130 06:32:39.972785 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerID="376254fe1540e485a58048ed599a0e1e2664491a5fb008a7c2926d2f38ef5153" exitCode=0 Jan 30 06:32:39 crc kubenswrapper[4931]: I0130 06:32:39.972830 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc57676c-79k7x" event={"ID":"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1","Type":"ContainerDied","Data":"376254fe1540e485a58048ed599a0e1e2664491a5fb008a7c2926d2f38ef5153"} Jan 30 06:32:39 crc kubenswrapper[4931]: I0130 06:32:39.972862 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc57676c-79k7x" event={"ID":"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1","Type":"ContainerStarted","Data":"3961340eb18072ebf266ecd7fc39a9becdb1fb9de589cabfaaf5229e8191f71b"} Jan 30 06:32:40 crc kubenswrapper[4931]: I0130 06:32:40.012034 4931 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jmsq5" podStartSLOduration=2.012009069 podStartE2EDuration="2.012009069s" podCreationTimestamp="2026-01-30 06:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:39.998485041 +0000 UTC m=+5095.368395338" watchObservedRunningTime="2026-01-30 06:32:40.012009069 +0000 UTC m=+5095.381919356" Jan 30 06:32:40 crc kubenswrapper[4931]: I0130 06:32:40.982969 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc57676c-79k7x" event={"ID":"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1","Type":"ContainerStarted","Data":"d7ae0ff751a18019fedb3fe9938258f86e6d1759fbc3c988b2c850d94022c34d"} Jan 30 06:32:40 crc kubenswrapper[4931]: I0130 06:32:40.983332 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:41 crc kubenswrapper[4931]: I0130 06:32:41.006617 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cc57676c-79k7x" podStartSLOduration=3.006596443 podStartE2EDuration="3.006596443s" podCreationTimestamp="2026-01-30 06:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:41.004575555 +0000 UTC m=+5096.374485812" watchObservedRunningTime="2026-01-30 06:32:41.006596443 +0000 UTC m=+5096.376506700" Jan 30 06:32:41 crc kubenswrapper[4931]: I0130 06:32:41.742998 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 06:32:43 crc kubenswrapper[4931]: I0130 06:32:43.002094 4931 generic.go:334] "Generic (PLEG): container finished" podID="3ab4d491-82ea-4973-a8e9-ef26ba522b43" containerID="eb43994040c586b7da805891db0738dbfdbd3eca5875691d38856aeb8f3f02e2" exitCode=0 Jan 30 06:32:43 crc kubenswrapper[4931]: I0130 06:32:43.002163 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jmsq5" event={"ID":"3ab4d491-82ea-4973-a8e9-ef26ba522b43","Type":"ContainerDied","Data":"eb43994040c586b7da805891db0738dbfdbd3eca5875691d38856aeb8f3f02e2"} Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.446383 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.524185 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-combined-ca-bundle\") pod \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.524574 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-config-data\") pod \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.524599 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-credential-keys\") pod \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.524649 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-scripts\") pod \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.524726 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-fernet-keys\") pod \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.524876 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzw69\" (UniqueName: \"kubernetes.io/projected/3ab4d491-82ea-4973-a8e9-ef26ba522b43-kube-api-access-zzw69\") pod \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\" (UID: \"3ab4d491-82ea-4973-a8e9-ef26ba522b43\") " Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.532899 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab4d491-82ea-4973-a8e9-ef26ba522b43-kube-api-access-zzw69" (OuterVolumeSpecName: "kube-api-access-zzw69") pod "3ab4d491-82ea-4973-a8e9-ef26ba522b43" (UID: "3ab4d491-82ea-4973-a8e9-ef26ba522b43"). InnerVolumeSpecName "kube-api-access-zzw69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.551982 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-scripts" (OuterVolumeSpecName: "scripts") pod "3ab4d491-82ea-4973-a8e9-ef26ba522b43" (UID: "3ab4d491-82ea-4973-a8e9-ef26ba522b43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.552652 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3ab4d491-82ea-4973-a8e9-ef26ba522b43" (UID: "3ab4d491-82ea-4973-a8e9-ef26ba522b43"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.572712 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab4d491-82ea-4973-a8e9-ef26ba522b43" (UID: "3ab4d491-82ea-4973-a8e9-ef26ba522b43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.573553 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3ab4d491-82ea-4973-a8e9-ef26ba522b43" (UID: "3ab4d491-82ea-4973-a8e9-ef26ba522b43"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.578115 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-config-data" (OuterVolumeSpecName: "config-data") pod "3ab4d491-82ea-4973-a8e9-ef26ba522b43" (UID: "3ab4d491-82ea-4973-a8e9-ef26ba522b43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.626595 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.626641 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzw69\" (UniqueName: \"kubernetes.io/projected/3ab4d491-82ea-4973-a8e9-ef26ba522b43-kube-api-access-zzw69\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.626659 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.626673 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.626686 4931 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:44 crc kubenswrapper[4931]: I0130 06:32:44.626697 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ab4d491-82ea-4973-a8e9-ef26ba522b43-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.024142 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jmsq5" event={"ID":"3ab4d491-82ea-4973-a8e9-ef26ba522b43","Type":"ContainerDied","Data":"384326f31c2144527b26181eb4cdf0836f375339b22144caee3d08cacb27cb53"} Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.024194 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="384326f31c2144527b26181eb4cdf0836f375339b22144caee3d08cacb27cb53" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.024217 4931 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jmsq5" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.128667 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jmsq5"] Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.137087 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jmsq5"] Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.202490 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dj4rz"] Jan 30 06:32:45 crc kubenswrapper[4931]: E0130 06:32:45.202908 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab4d491-82ea-4973-a8e9-ef26ba522b43" containerName="keystone-bootstrap" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.202930 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab4d491-82ea-4973-a8e9-ef26ba522b43" containerName="keystone-bootstrap" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.203112 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab4d491-82ea-4973-a8e9-ef26ba522b43" containerName="keystone-bootstrap" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.203798 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.209561 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.209619 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.209757 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.209809 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg2q7" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.209937 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.223353 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dj4rz"] Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.341107 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-credential-keys\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.341168 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-fernet-keys\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.341214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-scripts\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc 
kubenswrapper[4931]: I0130 06:32:45.341309 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-combined-ca-bundle\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.341357 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfr5\" (UniqueName: \"kubernetes.io/projected/26e6e702-ef29-49bd-836a-f46b2abd51cc-kube-api-access-zcfr5\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.341415 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-config-data\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.435998 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab4d491-82ea-4973-a8e9-ef26ba522b43" path="/var/lib/kubelet/pods/3ab4d491-82ea-4973-a8e9-ef26ba522b43/volumes" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.442305 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-combined-ca-bundle\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.442368 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfr5\" (UniqueName: \"kubernetes.io/projected/26e6e702-ef29-49bd-836a-f46b2abd51cc-kube-api-access-zcfr5\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.442419 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-config-data\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.442571 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-credential-keys\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.442599 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-fernet-keys\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.442639 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-scripts\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.450276 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-credential-keys\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.450407 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-scripts\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.450742 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-combined-ca-bundle\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.451470 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-fernet-keys\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.460332 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-config-data\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.474839 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfr5\" (UniqueName: \"kubernetes.io/projected/26e6e702-ef29-49bd-836a-f46b2abd51cc-kube-api-access-zcfr5\") pod \"keystone-bootstrap-dj4rz\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:45 crc kubenswrapper[4931]: I0130 06:32:45.558028 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:46 crc kubenswrapper[4931]: I0130 06:32:46.087368 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dj4rz"] Jan 30 06:32:47 crc kubenswrapper[4931]: I0130 06:32:47.048908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj4rz" event={"ID":"26e6e702-ef29-49bd-836a-f46b2abd51cc","Type":"ContainerStarted","Data":"4c48b12d2b0648e9dfb8706d20edad9baaeb7875a958059b6dc53ace27e47c1e"} Jan 30 06:32:47 crc kubenswrapper[4931]: I0130 06:32:47.049319 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj4rz" event={"ID":"26e6e702-ef29-49bd-836a-f46b2abd51cc","Type":"ContainerStarted","Data":"5059fe1de470f8b35ea58caed468c9ea16c78ba228c422fecb4fd0e2aa96cc2d"} Jan 30 06:32:47 crc kubenswrapper[4931]: I0130 06:32:47.089157 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dj4rz" podStartSLOduration=2.089133469 podStartE2EDuration="2.089133469s" podCreationTimestamp="2026-01-30 06:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:47.07834251 +0000 UTC m=+5102.448252817" watchObservedRunningTime="2026-01-30 06:32:47.089133469 +0000 UTC m=+5102.459043746" Jan 30 06:32:48 crc kubenswrapper[4931]: I0130 06:32:48.552756 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:32:48 crc kubenswrapper[4931]: I0130 06:32:48.625275 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666dc49759-6999t"] Jan 30 06:32:48 crc kubenswrapper[4931]: I0130 06:32:48.625625 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666dc49759-6999t" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" containerName="dnsmasq-dns" containerID="cri-o://287747b607e19df8febc584091bc73c9472de152802c4b7543aad2f9bb9038dd" gracePeriod=10 Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.064560 4931 generic.go:334] "Generic (PLEG): container finished" podID="0e31871f-729e-4b67-98d0-96973ea90de3" containerID="287747b607e19df8febc584091bc73c9472de152802c4b7543aad2f9bb9038dd" exitCode=0 Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.064618 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666dc49759-6999t" event={"ID":"0e31871f-729e-4b67-98d0-96973ea90de3","Type":"ContainerDied","Data":"287747b607e19df8febc584091bc73c9472de152802c4b7543aad2f9bb9038dd"} Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.066409 4931 generic.go:334] "Generic (PLEG): container finished" podID="26e6e702-ef29-49bd-836a-f46b2abd51cc" containerID="4c48b12d2b0648e9dfb8706d20edad9baaeb7875a958059b6dc53ace27e47c1e" exitCode=0 Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.066468 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj4rz" event={"ID":"26e6e702-ef29-49bd-836a-f46b2abd51cc","Type":"ContainerDied","Data":"4c48b12d2b0648e9dfb8706d20edad9baaeb7875a958059b6dc53ace27e47c1e"} Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.143562 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666dc49759-6999t" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.209257 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcfn7\" (UniqueName: \"kubernetes.io/projected/0e31871f-729e-4b67-98d0-96973ea90de3-kube-api-access-qcfn7\") pod \"0e31871f-729e-4b67-98d0-96973ea90de3\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.209332 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-config\") pod \"0e31871f-729e-4b67-98d0-96973ea90de3\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.209370 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-dns-svc\") pod \"0e31871f-729e-4b67-98d0-96973ea90de3\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.209573 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-nb\") pod \"0e31871f-729e-4b67-98d0-96973ea90de3\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.209632 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-sb\") pod \"0e31871f-729e-4b67-98d0-96973ea90de3\" (UID: \"0e31871f-729e-4b67-98d0-96973ea90de3\") " Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.217006 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e31871f-729e-4b67-98d0-96973ea90de3-kube-api-access-qcfn7" (OuterVolumeSpecName: "kube-api-access-qcfn7") pod "0e31871f-729e-4b67-98d0-96973ea90de3" (UID: "0e31871f-729e-4b67-98d0-96973ea90de3"). InnerVolumeSpecName "kube-api-access-qcfn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.250454 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e31871f-729e-4b67-98d0-96973ea90de3" (UID: "0e31871f-729e-4b67-98d0-96973ea90de3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.258235 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e31871f-729e-4b67-98d0-96973ea90de3" (UID: "0e31871f-729e-4b67-98d0-96973ea90de3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.266015 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-config" (OuterVolumeSpecName: "config") pod "0e31871f-729e-4b67-98d0-96973ea90de3" (UID: "0e31871f-729e-4b67-98d0-96973ea90de3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.268718 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e31871f-729e-4b67-98d0-96973ea90de3" (UID: "0e31871f-729e-4b67-98d0-96973ea90de3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.311874 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcfn7\" (UniqueName: \"kubernetes.io/projected/0e31871f-729e-4b67-98d0-96973ea90de3-kube-api-access-qcfn7\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.312068 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.312149 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.312213 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:49 crc kubenswrapper[4931]: I0130 06:32:49.312289 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e31871f-729e-4b67-98d0-96973ea90de3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.075262 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666dc49759-6999t" event={"ID":"0e31871f-729e-4b67-98d0-96973ea90de3","Type":"ContainerDied","Data":"040800f5a3860baea453ed500ca3333fe258941fbbb6b519709bd70c1f55e9fe"} Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.075315 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666dc49759-6999t" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.077197 4931 scope.go:117] "RemoveContainer" containerID="287747b607e19df8febc584091bc73c9472de152802c4b7543aad2f9bb9038dd" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.102152 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666dc49759-6999t"] Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.111782 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666dc49759-6999t"] Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.113764 4931 scope.go:117] "RemoveContainer" containerID="89484ad9976e7f0f1e67abb8cfa05b476c12211570ced1863487804ae3932924" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.465526 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.536810 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcfr5\" (UniqueName: \"kubernetes.io/projected/26e6e702-ef29-49bd-836a-f46b2abd51cc-kube-api-access-zcfr5\") pod \"26e6e702-ef29-49bd-836a-f46b2abd51cc\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.536898 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-combined-ca-bundle\") pod \"26e6e702-ef29-49bd-836a-f46b2abd51cc\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.536925 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-config-data\") pod \"26e6e702-ef29-49bd-836a-f46b2abd51cc\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.536953 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-fernet-keys\") pod \"26e6e702-ef29-49bd-836a-f46b2abd51cc\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.536984 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-scripts\") pod \"26e6e702-ef29-49bd-836a-f46b2abd51cc\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.537006 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-credential-keys\") pod \"26e6e702-ef29-49bd-836a-f46b2abd51cc\" (UID: \"26e6e702-ef29-49bd-836a-f46b2abd51cc\") " Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.541239 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "26e6e702-ef29-49bd-836a-f46b2abd51cc" (UID: "26e6e702-ef29-49bd-836a-f46b2abd51cc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.542664 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e6e702-ef29-49bd-836a-f46b2abd51cc-kube-api-access-zcfr5" (OuterVolumeSpecName: "kube-api-access-zcfr5") pod "26e6e702-ef29-49bd-836a-f46b2abd51cc" (UID: "26e6e702-ef29-49bd-836a-f46b2abd51cc"). InnerVolumeSpecName "kube-api-access-zcfr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.545901 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "26e6e702-ef29-49bd-836a-f46b2abd51cc" (UID: "26e6e702-ef29-49bd-836a-f46b2abd51cc"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.547474 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-scripts" (OuterVolumeSpecName: "scripts") pod "26e6e702-ef29-49bd-836a-f46b2abd51cc" (UID: "26e6e702-ef29-49bd-836a-f46b2abd51cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.565555 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-config-data" (OuterVolumeSpecName: "config-data") pod "26e6e702-ef29-49bd-836a-f46b2abd51cc" (UID: "26e6e702-ef29-49bd-836a-f46b2abd51cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.572131 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26e6e702-ef29-49bd-836a-f46b2abd51cc" (UID: "26e6e702-ef29-49bd-836a-f46b2abd51cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.639324 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcfr5\" (UniqueName: \"kubernetes.io/projected/26e6e702-ef29-49bd-836a-f46b2abd51cc-kube-api-access-zcfr5\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.639371 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.639381 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.639390 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.639559 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:50 crc kubenswrapper[4931]: I0130 06:32:50.639572 4931 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/26e6e702-ef29-49bd-836a-f46b2abd51cc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.085989 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dj4rz" event={"ID":"26e6e702-ef29-49bd-836a-f46b2abd51cc","Type":"ContainerDied","Data":"5059fe1de470f8b35ea58caed468c9ea16c78ba228c422fecb4fd0e2aa96cc2d"} Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.086346 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5059fe1de470f8b35ea58caed468c9ea16c78ba228c422fecb4fd0e2aa96cc2d" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.086166 4931 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dj4rz" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.437192 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" path="/var/lib/kubelet/pods/0e31871f-729e-4b67-98d0-96973ea90de3/volumes" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.571735 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6bc679c867-wth9b"] Jan 30 06:32:51 crc kubenswrapper[4931]: E0130 06:32:51.572199 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" containerName="dnsmasq-dns" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.572228 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" containerName="dnsmasq-dns" Jan 30 06:32:51 crc kubenswrapper[4931]: E0130 06:32:51.572279 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" containerName="init" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.572290 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" containerName="init" Jan 30 06:32:51 crc kubenswrapper[4931]: E0130 06:32:51.572310 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e6e702-ef29-49bd-836a-f46b2abd51cc" containerName="keystone-bootstrap" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.572324 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e6e702-ef29-49bd-836a-f46b2abd51cc" containerName="keystone-bootstrap" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.572602 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e31871f-729e-4b67-98d0-96973ea90de3" containerName="dnsmasq-dns" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.572636 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e6e702-ef29-49bd-836a-f46b2abd51cc" containerName="keystone-bootstrap" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.573485 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.576603 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lg2q7" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.576724 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.576825 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.579547 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.588853 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6bc679c867-wth9b"] Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.654547 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-combined-ca-bundle\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.654610 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-fernet-keys\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.654635 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-config-data\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.654658 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crh9k\" (UniqueName: \"kubernetes.io/projected/18835617-9ad2-4502-bbda-d4ac538081bd-kube-api-access-crh9k\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.654689 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-scripts\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.654730 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-credential-keys\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.756465 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-credential-keys\") pod 
\"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.756528 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-combined-ca-bundle\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.756565 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-fernet-keys\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.756587 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-config-data\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.756610 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crh9k\" (UniqueName: \"kubernetes.io/projected/18835617-9ad2-4502-bbda-d4ac538081bd-kube-api-access-crh9k\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.756640 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-scripts\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.760447 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-fernet-keys\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.761330 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-scripts\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.761513 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-combined-ca-bundle\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.761578 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-credential-keys\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 
06:32:51.761683 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18835617-9ad2-4502-bbda-d4ac538081bd-config-data\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.778046 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crh9k\" (UniqueName: \"kubernetes.io/projected/18835617-9ad2-4502-bbda-d4ac538081bd-kube-api-access-crh9k\") pod \"keystone-6bc679c867-wth9b\" (UID: \"18835617-9ad2-4502-bbda-d4ac538081bd\") " pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:51 crc kubenswrapper[4931]: I0130 06:32:51.891180 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:52 crc kubenswrapper[4931]: I0130 06:32:52.359560 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6bc679c867-wth9b"] Jan 30 06:32:53 crc kubenswrapper[4931]: I0130 06:32:53.107283 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6bc679c867-wth9b" event={"ID":"18835617-9ad2-4502-bbda-d4ac538081bd","Type":"ContainerStarted","Data":"e36b0ed7b56ce182ac97bba3fef122763c524704b25abc5f048a9cb86089ee31"} Jan 30 06:32:53 crc kubenswrapper[4931]: I0130 06:32:53.107553 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6bc679c867-wth9b" event={"ID":"18835617-9ad2-4502-bbda-d4ac538081bd","Type":"ContainerStarted","Data":"804b21fe14aafecc45d89562e1b5cfdd1acc840acca33979e160c0fd6b4599e9"} Jan 30 06:32:53 crc kubenswrapper[4931]: I0130 06:32:53.107593 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:32:53 crc kubenswrapper[4931]: I0130 06:32:53.134656 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6bc679c867-wth9b" podStartSLOduration=2.134629321 podStartE2EDuration="2.134629321s" podCreationTimestamp="2026-01-30 06:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:32:53.12588998 +0000 UTC m=+5108.495800247" watchObservedRunningTime="2026-01-30 06:32:53.134629321 +0000 UTC m=+5108.504539588" Jan 30 06:32:53 crc kubenswrapper[4931]: I0130 06:32:53.422802 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:32:53 crc kubenswrapper[4931]: E0130 06:32:53.422975 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:33:04 crc kubenswrapper[4931]: I0130 06:33:04.422744 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:33:04 crc kubenswrapper[4931]: E0130 06:33:04.425276 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:33:15 crc kubenswrapper[4931]: I0130 06:33:15.434946 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:33:15 crc kubenswrapper[4931]: E0130 06:33:15.437478 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:33:23 crc kubenswrapper[4931]: I0130 06:33:23.259680 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6bc679c867-wth9b" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.733325 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.735292 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.737706 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.738044 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.740059 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8ssd9" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.746410 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.815044 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59zl\" (UniqueName: \"kubernetes.io/projected/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-kube-api-access-c59zl\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.815104 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config-secret\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.815140 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.917249 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59zl\" (UniqueName: \"kubernetes.io/projected/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-kube-api-access-c59zl\") pod \"openstackclient\" (UID: 
\"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.917311 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config-secret\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.917357 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.918408 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.934828 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config-secret\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:26 crc kubenswrapper[4931]: I0130 06:33:26.949975 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59zl\" (UniqueName: \"kubernetes.io/projected/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-kube-api-access-c59zl\") pod \"openstackclient\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " pod="openstack/openstackclient" Jan 30 06:33:27 crc kubenswrapper[4931]: I0130 06:33:27.058531 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 06:33:27 crc kubenswrapper[4931]: I0130 06:33:27.539515 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 06:33:27 crc kubenswrapper[4931]: W0130 06:33:27.560874 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod459f1ff6_e3cb_45a8_9a4a_0e24e7881407.slice/crio-f119add10e3e59ef80f35a8b0a1d9d2e4bc8283a75ba76bc8e9aac00049c6ac6 WatchSource:0}: Error finding container f119add10e3e59ef80f35a8b0a1d9d2e4bc8283a75ba76bc8e9aac00049c6ac6: Status 404 returned error can't find the container with id f119add10e3e59ef80f35a8b0a1d9d2e4bc8283a75ba76bc8e9aac00049c6ac6 Jan 30 06:33:28 crc kubenswrapper[4931]: I0130 06:33:28.480767 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"459f1ff6-e3cb-45a8-9a4a-0e24e7881407","Type":"ContainerStarted","Data":"73102981ee7cbbfca21cfe57a2e7dc11e7bcd50d611864190f31f741ca04bd1e"} Jan 30 06:33:28 crc kubenswrapper[4931]: I0130 06:33:28.481145 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"459f1ff6-e3cb-45a8-9a4a-0e24e7881407","Type":"ContainerStarted","Data":"f119add10e3e59ef80f35a8b0a1d9d2e4bc8283a75ba76bc8e9aac00049c6ac6"} Jan 30 06:33:28 crc kubenswrapper[4931]: I0130 06:33:28.515508 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.515482197 podStartE2EDuration="2.515482197s" podCreationTimestamp="2026-01-30 06:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:33:28.504012608 +0000 UTC m=+5143.873922905" watchObservedRunningTime="2026-01-30 06:33:28.515482197 +0000 UTC m=+5143.885392494" Jan 30 06:33:30 crc kubenswrapper[4931]: I0130 06:33:30.422630 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:33:30 crc kubenswrapper[4931]: E0130 06:33:30.423544 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:33:41 crc kubenswrapper[4931]: I0130 06:33:41.421957 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:33:41 crc kubenswrapper[4931]: E0130 06:33:41.423026 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:33:55 crc kubenswrapper[4931]: I0130 06:33:55.431894 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:33:55 crc kubenswrapper[4931]: E0130 06:33:55.432734 4931 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:33:59 crc kubenswrapper[4931]: I0130 06:33:59.089859 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4x52j"] Jan 30 06:33:59 crc kubenswrapper[4931]: I0130 06:33:59.104129 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4x52j"] Jan 30 06:33:59 crc kubenswrapper[4931]: I0130 06:33:59.447652 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56513a2a-14aa-4055-8b35-de5c272faab9" path="/var/lib/kubelet/pods/56513a2a-14aa-4055-8b35-de5c272faab9/volumes" Jan 30 06:34:10 crc kubenswrapper[4931]: I0130 06:34:10.422355 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:34:10 crc kubenswrapper[4931]: E0130 06:34:10.423641 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:34:21 crc kubenswrapper[4931]: I0130 06:34:21.422962 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:34:21 crc kubenswrapper[4931]: E0130 06:34:21.424000 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:34:34 crc kubenswrapper[4931]: I0130 06:34:34.422038 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:34:34 crc kubenswrapper[4931]: E0130 06:34:34.422783 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:34:49 crc kubenswrapper[4931]: I0130 06:34:49.424753 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:34:49 crc kubenswrapper[4931]: E0130 06:34:49.425800 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:34:54 crc kubenswrapper[4931]: I0130 06:34:54.111028 4931 scope.go:117] "RemoveContainer" containerID="88feac8d2b5d033c066731d3fa1d66cc34b935b576cc56423d76770840e869ad" Jan 30 06:35:00 crc kubenswrapper[4931]: I0130 06:35:00.422497 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:35:00 crc kubenswrapper[4931]: E0130 06:35:00.425306 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:35:07 crc kubenswrapper[4931]: I0130 06:35:07.990230 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-cxbxk"] Jan 30 06:35:07 crc kubenswrapper[4931]: I0130 06:35:07.991735 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.001239 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cxbxk"] Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.086619 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-64f9-account-create-update-sm7kp"] Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.087887 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.091257 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.096188 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vf22\" (UniqueName: \"kubernetes.io/projected/7e8b686f-89e9-4561-b4da-73c3087f1913-kube-api-access-4vf22\") pod \"barbican-db-create-cxbxk\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.096262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8b686f-89e9-4561-b4da-73c3087f1913-operator-scripts\") pod \"barbican-db-create-cxbxk\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.098578 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-64f9-account-create-update-sm7kp"] Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.198168 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vf22\" (UniqueName: \"kubernetes.io/projected/7e8b686f-89e9-4561-b4da-73c3087f1913-kube-api-access-4vf22\") pod \"barbican-db-create-cxbxk\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.198223 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/7e8b686f-89e9-4561-b4da-73c3087f1913-operator-scripts\") pod \"barbican-db-create-cxbxk\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.198283 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cl5\" (UniqueName: \"kubernetes.io/projected/f1237d07-19d9-47bb-8fb8-42e905fcc41b-kube-api-access-x7cl5\") pod \"barbican-64f9-account-create-update-sm7kp\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.198301 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1237d07-19d9-47bb-8fb8-42e905fcc41b-operator-scripts\") pod \"barbican-64f9-account-create-update-sm7kp\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.198953 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8b686f-89e9-4561-b4da-73c3087f1913-operator-scripts\") pod \"barbican-db-create-cxbxk\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.221092 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vf22\" (UniqueName: \"kubernetes.io/projected/7e8b686f-89e9-4561-b4da-73c3087f1913-kube-api-access-4vf22\") pod \"barbican-db-create-cxbxk\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.299623 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7cl5\" (UniqueName: \"kubernetes.io/projected/f1237d07-19d9-47bb-8fb8-42e905fcc41b-kube-api-access-x7cl5\") pod \"barbican-64f9-account-create-update-sm7kp\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.299673 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1237d07-19d9-47bb-8fb8-42e905fcc41b-operator-scripts\") pod \"barbican-64f9-account-create-update-sm7kp\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.300473 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1237d07-19d9-47bb-8fb8-42e905fcc41b-operator-scripts\") pod \"barbican-64f9-account-create-update-sm7kp\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.309258 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.317167 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7cl5\" (UniqueName: \"kubernetes.io/projected/f1237d07-19d9-47bb-8fb8-42e905fcc41b-kube-api-access-x7cl5\") pod \"barbican-64f9-account-create-update-sm7kp\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.416883 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.717740 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-cxbxk"] Jan 30 06:35:08 crc kubenswrapper[4931]: I0130 06:35:08.944945 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-64f9-account-create-update-sm7kp"] Jan 30 06:35:08 crc kubenswrapper[4931]: W0130 06:35:08.945754 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1237d07_19d9_47bb_8fb8_42e905fcc41b.slice/crio-8b58d2af4ffc0ac411e7a90473fbcac724986ee3773fc8ecb3af3a66e1ccd3b2 WatchSource:0}: Error finding container 8b58d2af4ffc0ac411e7a90473fbcac724986ee3773fc8ecb3af3a66e1ccd3b2: Status 404 returned error can't find the container with id 8b58d2af4ffc0ac411e7a90473fbcac724986ee3773fc8ecb3af3a66e1ccd3b2 Jan 30 06:35:09 crc kubenswrapper[4931]: E0130 06:35:09.161957 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e8b686f_89e9_4561_b4da_73c3087f1913.slice/crio-d12d2fc2afba982df405694f131f75a0e5433ce67ce82580ab99cf0746dfdbc2.scope\": RecentStats: unable to find data in memory cache]" Jan 30 06:35:09 crc kubenswrapper[4931]: I0130 06:35:09.559934 4931 generic.go:334] "Generic (PLEG): container finished" podID="f1237d07-19d9-47bb-8fb8-42e905fcc41b" containerID="328e8eda0559ed6f531366255d38e56b8621e4607eb9add0633123842cfdda68" exitCode=0 Jan 30 06:35:09 crc kubenswrapper[4931]: I0130 06:35:09.560019 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-64f9-account-create-update-sm7kp" event={"ID":"f1237d07-19d9-47bb-8fb8-42e905fcc41b","Type":"ContainerDied","Data":"328e8eda0559ed6f531366255d38e56b8621e4607eb9add0633123842cfdda68"} Jan 30 06:35:09 crc kubenswrapper[4931]: I0130 06:35:09.560412 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-64f9-account-create-update-sm7kp" event={"ID":"f1237d07-19d9-47bb-8fb8-42e905fcc41b","Type":"ContainerStarted","Data":"8b58d2af4ffc0ac411e7a90473fbcac724986ee3773fc8ecb3af3a66e1ccd3b2"} Jan 30 06:35:09 crc kubenswrapper[4931]: I0130 06:35:09.570974 4931 generic.go:334] "Generic (PLEG): container finished" podID="7e8b686f-89e9-4561-b4da-73c3087f1913" containerID="d12d2fc2afba982df405694f131f75a0e5433ce67ce82580ab99cf0746dfdbc2" exitCode=0 Jan 30 06:35:09 crc kubenswrapper[4931]: I0130 06:35:09.571078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cxbxk" event={"ID":"7e8b686f-89e9-4561-b4da-73c3087f1913","Type":"ContainerDied","Data":"d12d2fc2afba982df405694f131f75a0e5433ce67ce82580ab99cf0746dfdbc2"} Jan 30 06:35:09 crc kubenswrapper[4931]: I0130 06:35:09.571127 4931 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-create-cxbxk" event={"ID":"7e8b686f-89e9-4561-b4da-73c3087f1913","Type":"ContainerStarted","Data":"7ec2122d80a12a975a10746666145e6ab74df22454658140eeceda76eef19a0c"} Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.003404 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.008410 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.054082 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7cl5\" (UniqueName: \"kubernetes.io/projected/f1237d07-19d9-47bb-8fb8-42e905fcc41b-kube-api-access-x7cl5\") pod \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.054137 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8b686f-89e9-4561-b4da-73c3087f1913-operator-scripts\") pod \"7e8b686f-89e9-4561-b4da-73c3087f1913\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.054161 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1237d07-19d9-47bb-8fb8-42e905fcc41b-operator-scripts\") pod \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\" (UID: \"f1237d07-19d9-47bb-8fb8-42e905fcc41b\") " Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.054209 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vf22\" (UniqueName: \"kubernetes.io/projected/7e8b686f-89e9-4561-b4da-73c3087f1913-kube-api-access-4vf22\") pod \"7e8b686f-89e9-4561-b4da-73c3087f1913\" (UID: \"7e8b686f-89e9-4561-b4da-73c3087f1913\") " Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.059823 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e8b686f-89e9-4561-b4da-73c3087f1913-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e8b686f-89e9-4561-b4da-73c3087f1913" (UID: "7e8b686f-89e9-4561-b4da-73c3087f1913"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.059866 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1237d07-19d9-47bb-8fb8-42e905fcc41b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1237d07-19d9-47bb-8fb8-42e905fcc41b" (UID: "f1237d07-19d9-47bb-8fb8-42e905fcc41b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.062683 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8b686f-89e9-4561-b4da-73c3087f1913-kube-api-access-4vf22" (OuterVolumeSpecName: "kube-api-access-4vf22") pod "7e8b686f-89e9-4561-b4da-73c3087f1913" (UID: "7e8b686f-89e9-4561-b4da-73c3087f1913"). InnerVolumeSpecName "kube-api-access-4vf22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.070234 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1237d07-19d9-47bb-8fb8-42e905fcc41b-kube-api-access-x7cl5" (OuterVolumeSpecName: "kube-api-access-x7cl5") pod "f1237d07-19d9-47bb-8fb8-42e905fcc41b" (UID: "f1237d07-19d9-47bb-8fb8-42e905fcc41b"). InnerVolumeSpecName "kube-api-access-x7cl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.155969 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7cl5\" (UniqueName: \"kubernetes.io/projected/f1237d07-19d9-47bb-8fb8-42e905fcc41b-kube-api-access-x7cl5\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.156008 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e8b686f-89e9-4561-b4da-73c3087f1913-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.156021 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1237d07-19d9-47bb-8fb8-42e905fcc41b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.156035 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vf22\" (UniqueName: \"kubernetes.io/projected/7e8b686f-89e9-4561-b4da-73c3087f1913-kube-api-access-4vf22\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.591129 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-cxbxk" event={"ID":"7e8b686f-89e9-4561-b4da-73c3087f1913","Type":"ContainerDied","Data":"7ec2122d80a12a975a10746666145e6ab74df22454658140eeceda76eef19a0c"} Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.591559 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec2122d80a12a975a10746666145e6ab74df22454658140eeceda76eef19a0c" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.591164 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-cxbxk" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.593204 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-64f9-account-create-update-sm7kp" event={"ID":"f1237d07-19d9-47bb-8fb8-42e905fcc41b","Type":"ContainerDied","Data":"8b58d2af4ffc0ac411e7a90473fbcac724986ee3773fc8ecb3af3a66e1ccd3b2"} Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.593260 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b58d2af4ffc0ac411e7a90473fbcac724986ee3773fc8ecb3af3a66e1ccd3b2" Jan 30 06:35:11 crc kubenswrapper[4931]: I0130 06:35:11.593268 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-64f9-account-create-update-sm7kp" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.373171 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zzqsh"] Jan 30 06:35:13 crc kubenswrapper[4931]: E0130 06:35:13.374277 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1237d07-19d9-47bb-8fb8-42e905fcc41b" containerName="mariadb-account-create-update" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.374304 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1237d07-19d9-47bb-8fb8-42e905fcc41b" containerName="mariadb-account-create-update" Jan 30 06:35:13 crc kubenswrapper[4931]: E0130 06:35:13.374338 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8b686f-89e9-4561-b4da-73c3087f1913" containerName="mariadb-database-create" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.374351 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8b686f-89e9-4561-b4da-73c3087f1913" containerName="mariadb-database-create" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.374991 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8b686f-89e9-4561-b4da-73c3087f1913" containerName="mariadb-database-create" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.375064 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1237d07-19d9-47bb-8fb8-42e905fcc41b" containerName="mariadb-account-create-update" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.377125 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.386142 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.386211 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dnzcg" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.389528 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zzqsh"] Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.413878 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-combined-ca-bundle\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.414276 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-db-sync-config-data\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.414400 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262pn\" (UniqueName: \"kubernetes.io/projected/60872807-e034-4844-9f79-8005640c308c-kube-api-access-262pn\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.515893 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-combined-ca-bundle\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.516292 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-db-sync-config-data\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.516409 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-262pn\" (UniqueName: \"kubernetes.io/projected/60872807-e034-4844-9f79-8005640c308c-kube-api-access-262pn\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.522412 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-combined-ca-bundle\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.522844 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-db-sync-config-data\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.532614 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-262pn\" (UniqueName: \"kubernetes.io/projected/60872807-e034-4844-9f79-8005640c308c-kube-api-access-262pn\") pod \"barbican-db-sync-zzqsh\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:13 crc kubenswrapper[4931]: I0130 06:35:13.714304 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:14 crc kubenswrapper[4931]: I0130 06:35:14.052398 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zzqsh"] Jan 30 06:35:14 crc kubenswrapper[4931]: I0130 06:35:14.619745 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzqsh" event={"ID":"60872807-e034-4844-9f79-8005640c308c","Type":"ContainerStarted","Data":"c1b6dac81e48bbee78ee0997dd94e0a4ebd87490b839272e84bc72df445ca206"} Jan 30 06:35:14 crc kubenswrapper[4931]: I0130 06:35:14.620041 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzqsh" event={"ID":"60872807-e034-4844-9f79-8005640c308c","Type":"ContainerStarted","Data":"63d64ee019ed4b571b958873b656eed687082260a1ff967bfdde1ccd255d06fc"} Jan 30 06:35:14 crc kubenswrapper[4931]: I0130 06:35:14.640727 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zzqsh" podStartSLOduration=1.640699692 podStartE2EDuration="1.640699692s" podCreationTimestamp="2026-01-30 06:35:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:14.637447978 +0000 UTC m=+5250.007358235" watchObservedRunningTime="2026-01-30 06:35:14.640699692 +0000 UTC m=+5250.010609969" Jan 30 06:35:15 crc kubenswrapper[4931]: I0130 06:35:15.434336 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:35:15 crc kubenswrapper[4931]: E0130 06:35:15.434598 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:35:15 crc kubenswrapper[4931]: I0130 06:35:15.630878 4931 generic.go:334] "Generic (PLEG): container finished" podID="60872807-e034-4844-9f79-8005640c308c" containerID="c1b6dac81e48bbee78ee0997dd94e0a4ebd87490b839272e84bc72df445ca206" exitCode=0 Jan 30 06:35:15 crc kubenswrapper[4931]: I0130 06:35:15.630917 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzqsh" event={"ID":"60872807-e034-4844-9f79-8005640c308c","Type":"ContainerDied","Data":"c1b6dac81e48bbee78ee0997dd94e0a4ebd87490b839272e84bc72df445ca206"} Jan 30 06:35:16 crc kubenswrapper[4931]: I0130 06:35:16.988332 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.070371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-db-sync-config-data\") pod \"60872807-e034-4844-9f79-8005640c308c\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.070467 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-combined-ca-bundle\") pod \"60872807-e034-4844-9f79-8005640c308c\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.070708 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-262pn\" (UniqueName: \"kubernetes.io/projected/60872807-e034-4844-9f79-8005640c308c-kube-api-access-262pn\") pod \"60872807-e034-4844-9f79-8005640c308c\" (UID: \"60872807-e034-4844-9f79-8005640c308c\") " Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.076140 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60872807-e034-4844-9f79-8005640c308c-kube-api-access-262pn" (OuterVolumeSpecName: "kube-api-access-262pn") pod "60872807-e034-4844-9f79-8005640c308c" (UID: "60872807-e034-4844-9f79-8005640c308c"). InnerVolumeSpecName "kube-api-access-262pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.084590 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "60872807-e034-4844-9f79-8005640c308c" (UID: "60872807-e034-4844-9f79-8005640c308c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.097008 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60872807-e034-4844-9f79-8005640c308c" (UID: "60872807-e034-4844-9f79-8005640c308c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.172946 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.172976 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-262pn\" (UniqueName: \"kubernetes.io/projected/60872807-e034-4844-9f79-8005640c308c-kube-api-access-262pn\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.172988 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60872807-e034-4844-9f79-8005640c308c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.657279 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzqsh" event={"ID":"60872807-e034-4844-9f79-8005640c308c","Type":"ContainerDied","Data":"63d64ee019ed4b571b958873b656eed687082260a1ff967bfdde1ccd255d06fc"} Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.657332 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d64ee019ed4b571b958873b656eed687082260a1ff967bfdde1ccd255d06fc" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.657336 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzqsh" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.879849 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-784d6f7789-45xt8"] Jan 30 06:35:17 crc kubenswrapper[4931]: E0130 06:35:17.880242 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60872807-e034-4844-9f79-8005640c308c" containerName="barbican-db-sync" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.880263 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="60872807-e034-4844-9f79-8005640c308c" containerName="barbican-db-sync" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.881196 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="60872807-e034-4844-9f79-8005640c308c" containerName="barbican-db-sync" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.888068 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.913056 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.915224 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-dnzcg" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.915359 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.939032 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-784d6f7789-45xt8"] Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.953805 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-67d8db4f6b-c2v48"] Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.955536 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:17 crc kubenswrapper[4931]: I0130 06:35:17.959903 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:17.998575 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-logs\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.001299 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-combined-ca-bundle\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.001352 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-config-data-custom\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.001634 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twf5p\" (UniqueName: \"kubernetes.io/projected/a72d7303-20af-4fe7-be58-962eaa52c31a-kube-api-access-twf5p\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.001792 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-config-data\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.001957 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-config-data-custom\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.002196 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72d7303-20af-4fe7-be58-962eaa52c31a-logs\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.002331 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-combined-ca-bundle\") pod 
\"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.002449 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-config-data\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.002768 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgbvb\" (UniqueName: \"kubernetes.io/projected/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-kube-api-access-sgbvb\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.006948 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67d8db4f6b-c2v48"] Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.044400 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-669bb76d6c-ld8p5"] Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.054219 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.064275 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-669bb76d6c-ld8p5"] Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.070623 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-847c6776d8-4sw8x"] Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.075917 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.080236 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-847c6776d8-4sw8x"] Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.080669 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104667 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-combined-ca-bundle\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104709 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-config-data-custom\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104739 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twf5p\" (UniqueName: \"kubernetes.io/projected/a72d7303-20af-4fe7-be58-962eaa52c31a-kube-api-access-twf5p\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104772 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-config-data\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104797 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-config-data-custom\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104817 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4f9\" (UniqueName: \"kubernetes.io/projected/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-kube-api-access-4t4f9\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a72d7303-20af-4fe7-be58-962eaa52c31a-logs\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104856 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-dns-svc\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" 
(UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104872 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-combined-ca-bundle\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104899 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-config-data\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104940 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-config\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104960 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-sb\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.104994 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgbvb\" (UniqueName: \"kubernetes.io/projected/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-kube-api-access-sgbvb\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.105025 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-nb\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.105044 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-logs\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.105459 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-logs\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.105813 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a72d7303-20af-4fe7-be58-962eaa52c31a-logs\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.108558 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-combined-ca-bundle\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.111016 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-combined-ca-bundle\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.111672 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-config-data\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.114532 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-config-data-custom\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.116510 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-config-data\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.126986 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a72d7303-20af-4fe7-be58-962eaa52c31a-config-data-custom\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.130802 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgbvb\" (UniqueName: \"kubernetes.io/projected/acbd1f75-958f-4fe5-8d52-f32c4d6c53f1-kube-api-access-sgbvb\") pod \"barbican-keystone-listener-67d8db4f6b-c2v48\" (UID: \"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1\") " pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.135476 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twf5p\" (UniqueName: \"kubernetes.io/projected/a72d7303-20af-4fe7-be58-962eaa52c31a-kube-api-access-twf5p\") pod \"barbican-worker-784d6f7789-45xt8\" (UID: \"a72d7303-20af-4fe7-be58-962eaa52c31a\") " pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206173 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-config-data\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206215 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t4f9\" (UniqueName: \"kubernetes.io/projected/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-kube-api-access-4t4f9\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206247 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-dns-svc\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206348 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-logs\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206387 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-combined-ca-bundle\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206434 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskh5\" (UniqueName: \"kubernetes.io/projected/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-kube-api-access-tskh5\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206563 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-config\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206587 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-sb\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206618 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-config-data-custom\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206808 
4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-nb\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.206960 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-dns-svc\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.207317 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-config\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.207726 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-nb\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.207764 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-sb\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.222912 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t4f9\" (UniqueName: \"kubernetes.io/projected/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-kube-api-access-4t4f9\") pod \"dnsmasq-dns-669bb76d6c-ld8p5\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") " pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.239339 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-784d6f7789-45xt8" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.285876 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.308109 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-config-data\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.308166 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-logs\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.308187 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-combined-ca-bundle\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.308208 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tskh5\" (UniqueName: \"kubernetes.io/projected/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-kube-api-access-tskh5\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.308260 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-config-data-custom\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.309039 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-logs\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.312766 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-config-data-custom\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.312763 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-combined-ca-bundle\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.312864 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-config-data\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 
06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.329896 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tskh5\" (UniqueName: \"kubernetes.io/projected/62d9ff65-c8d2-413f-b323-47a1db5ea2ed-kube-api-access-tskh5\") pod \"barbican-api-847c6776d8-4sw8x\" (UID: \"62d9ff65-c8d2-413f-b323-47a1db5ea2ed\") " pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.370794 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.396381 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.748642 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-784d6f7789-45xt8"] Jan 30 06:35:18 crc kubenswrapper[4931]: W0130 06:35:18.776093 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda72d7303_20af_4fe7_be58_962eaa52c31a.slice/crio-9d6b3a10315ecaa3f9e38ab7113d934bf12afc216592235bbbc7e3070dbd10bf WatchSource:0}: Error finding container 9d6b3a10315ecaa3f9e38ab7113d934bf12afc216592235bbbc7e3070dbd10bf: Status 404 returned error can't find the container with id 9d6b3a10315ecaa3f9e38ab7113d934bf12afc216592235bbbc7e3070dbd10bf Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.832155 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-669bb76d6c-ld8p5"] Jan 30 06:35:18 crc kubenswrapper[4931]: W0130 06:35:18.837988 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66eeada2_dfe0_4ebc_af62_17af9f1ce15e.slice/crio-509c57c8ee870d1521b8c78081fe670bec4a3e4b66684ebdaba1d4543435f196 WatchSource:0}: Error finding container 509c57c8ee870d1521b8c78081fe670bec4a3e4b66684ebdaba1d4543435f196: Status 404 returned error can't find the container with id 509c57c8ee870d1521b8c78081fe670bec4a3e4b66684ebdaba1d4543435f196 Jan 30 06:35:18 crc kubenswrapper[4931]: I0130 06:35:18.887558 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67d8db4f6b-c2v48"] Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.081842 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-847c6776d8-4sw8x"] Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.681058 4931 generic.go:334] "Generic (PLEG): container finished" podID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerID="fac0f8070b365b5149eb56040af36ca71ef68ff65a5f0aef7d169c86e39479df" exitCode=0 Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.681105 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" event={"ID":"66eeada2-dfe0-4ebc-af62-17af9f1ce15e","Type":"ContainerDied","Data":"fac0f8070b365b5149eb56040af36ca71ef68ff65a5f0aef7d169c86e39479df"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.681455 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" event={"ID":"66eeada2-dfe0-4ebc-af62-17af9f1ce15e","Type":"ContainerStarted","Data":"509c57c8ee870d1521b8c78081fe670bec4a3e4b66684ebdaba1d4543435f196"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.684745 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-847c6776d8-4sw8x" event={"ID":"62d9ff65-c8d2-413f-b323-47a1db5ea2ed","Type":"ContainerStarted","Data":"8947c9d734a9979b06fa28d152fd4c11e04ce4dd699bc499c5572e840e332e86"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.684774 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-847c6776d8-4sw8x" event={"ID":"62d9ff65-c8d2-413f-b323-47a1db5ea2ed","Type":"ContainerStarted","Data":"502fc034db2d6646ab4a40fd5b988d6eb4f191d62fc0b96eb5a13ea1ffbbafbd"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.684783 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-847c6776d8-4sw8x" event={"ID":"62d9ff65-c8d2-413f-b323-47a1db5ea2ed","Type":"ContainerStarted","Data":"22ebe02ddcdace17f784f5c14374e84999e89084d51838f74e523549180e92d5"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.685714 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.685746 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.687862 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" event={"ID":"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1","Type":"ContainerStarted","Data":"fb1b28820a1408e8b9dd3f7446fa1adfd285d8b751ba27847ffcb7971471416c"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.687884 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" event={"ID":"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1","Type":"ContainerStarted","Data":"8c78e4e5c912e139faa60e53e6a80dc09c1d87c087efc4e5107e4fab1c5ea953"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.687894 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" event={"ID":"acbd1f75-958f-4fe5-8d52-f32c4d6c53f1","Type":"ContainerStarted","Data":"91165a486ae888d1f97850ffacd0dd3f78aad39b0075225e061af9f4bae73ca4"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.688868 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-784d6f7789-45xt8" event={"ID":"a72d7303-20af-4fe7-be58-962eaa52c31a","Type":"ContainerStarted","Data":"af9911eb0aac4d0427a84aac61b693548615547737bc7bcb9df3ccd210735a79"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.688883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-784d6f7789-45xt8" event={"ID":"a72d7303-20af-4fe7-be58-962eaa52c31a","Type":"ContainerStarted","Data":"2f35f310b30d2e5188e1f321d43675adde801898c34e058b69ee5e5c218a7827"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.688892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-784d6f7789-45xt8" event={"ID":"a72d7303-20af-4fe7-be58-962eaa52c31a","Type":"ContainerStarted","Data":"9d6b3a10315ecaa3f9e38ab7113d934bf12afc216592235bbbc7e3070dbd10bf"} Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.767592 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-784d6f7789-45xt8" podStartSLOduration=2.767576769 podStartE2EDuration="2.767576769s" podCreationTimestamp="2026-01-30 06:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-30 06:35:19.745110945 +0000 UTC m=+5255.115021202" watchObservedRunningTime="2026-01-30 06:35:19.767576769 +0000 UTC m=+5255.137487026" Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.821918 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-67d8db4f6b-c2v48" podStartSLOduration=2.821902248 podStartE2EDuration="2.821902248s" podCreationTimestamp="2026-01-30 06:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:19.792686 +0000 UTC m=+5255.162596257" watchObservedRunningTime="2026-01-30 06:35:19.821902248 +0000 UTC m=+5255.191812505" Jan 30 06:35:19 crc kubenswrapper[4931]: I0130 06:35:19.824458 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-847c6776d8-4sw8x" podStartSLOduration=1.8244511719999998 podStartE2EDuration="1.824451172s" podCreationTimestamp="2026-01-30 06:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:19.814243269 +0000 UTC m=+5255.184153516" watchObservedRunningTime="2026-01-30 06:35:19.824451172 +0000 UTC m=+5255.194361429" Jan 30 06:35:20 crc kubenswrapper[4931]: I0130 06:35:20.701518 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" event={"ID":"66eeada2-dfe0-4ebc-af62-17af9f1ce15e","Type":"ContainerStarted","Data":"ae5834eb7aa94f3cb815f35e315bb6ed049e191afc33534b32776fd0debd1c4a"} Jan 30 06:35:20 crc kubenswrapper[4931]: I0130 06:35:20.702042 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:20 crc kubenswrapper[4931]: I0130 06:35:20.722057 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" podStartSLOduration=3.7220374720000002 podStartE2EDuration="3.722037472s" podCreationTimestamp="2026-01-30 06:35:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:20.718379047 +0000 UTC m=+5256.088289314" watchObservedRunningTime="2026-01-30 06:35:20.722037472 +0000 UTC m=+5256.091947729" Jan 30 06:35:27 crc kubenswrapper[4931]: I0130 06:35:27.422341 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:35:27 crc kubenswrapper[4931]: E0130 06:35:27.423667 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.372613 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.453373 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc57676c-79k7x"] Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.453777 4931 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-cc57676c-79k7x" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="dnsmasq-dns" containerID="cri-o://d7ae0ff751a18019fedb3fe9938258f86e6d1759fbc3c988b2c850d94022c34d" gracePeriod=10 Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.553882 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cc57676c-79k7x" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.24:5353: connect: connection refused" Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.832777 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerID="d7ae0ff751a18019fedb3fe9938258f86e6d1759fbc3c988b2c850d94022c34d" exitCode=0 Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.832825 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc57676c-79k7x" event={"ID":"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1","Type":"ContainerDied","Data":"d7ae0ff751a18019fedb3fe9938258f86e6d1759fbc3c988b2c850d94022c34d"} Jan 30 06:35:28 crc kubenswrapper[4931]: I0130 06:35:28.972033 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.060179 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-sb\") pod \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.060278 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-config\") pod \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.060365 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-dns-svc\") pod \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.060398 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-nb\") pod \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.060495 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd7x6\" (UniqueName: \"kubernetes.io/projected/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-kube-api-access-dd7x6\") pod \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\" (UID: \"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1\") " Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.065992 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-kube-api-access-dd7x6" (OuterVolumeSpecName: "kube-api-access-dd7x6") pod "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" (UID: "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1"). InnerVolumeSpecName "kube-api-access-dd7x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.107765 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" (UID: "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.112001 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-config" (OuterVolumeSpecName: "config") pod "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" (UID: "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.113491 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" (UID: "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.118946 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" (UID: "fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.162923 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.162972 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.162996 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd7x6\" (UniqueName: \"kubernetes.io/projected/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-kube-api-access-dd7x6\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.163014 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.163031 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:29 crc kubenswrapper[4931]: E0130 06:35:29.641390 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5868fc_e1b4_4f28_ba8d_6b0d9fad2db1.slice/crio-3961340eb18072ebf266ecd7fc39a9becdb1fb9de589cabfaaf5229e8191f71b\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe5868fc_e1b4_4f28_ba8d_6b0d9fad2db1.slice\": RecentStats: unable to find data in memory cache]" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.754828 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.800226 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-847c6776d8-4sw8x" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.893366 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc57676c-79k7x" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.893778 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc57676c-79k7x" event={"ID":"fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1","Type":"ContainerDied","Data":"3961340eb18072ebf266ecd7fc39a9becdb1fb9de589cabfaaf5229e8191f71b"} Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.893808 4931 scope.go:117] "RemoveContainer" containerID="d7ae0ff751a18019fedb3fe9938258f86e6d1759fbc3c988b2c850d94022c34d" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.914579 4931 scope.go:117] "RemoveContainer" containerID="376254fe1540e485a58048ed599a0e1e2664491a5fb008a7c2926d2f38ef5153" Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.937993 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc57676c-79k7x"] Jan 30 06:35:29 crc kubenswrapper[4931]: I0130 06:35:29.944776 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cc57676c-79k7x"] Jan 30 06:35:31 crc kubenswrapper[4931]: I0130 06:35:31.434006 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" path="/var/lib/kubelet/pods/fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1/volumes" Jan 30 06:35:38 crc kubenswrapper[4931]: E0130 06:35:38.661643 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:42406->38.102.83.179:45103: write tcp 38.102.83.179:42406->38.102.83.179:45103: write: connection reset by peer Jan 30 06:35:41 crc kubenswrapper[4931]: I0130 06:35:41.422253 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.022478 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"7acc7dd93c5206c6ff8ebf3271a041083dbd9bcf0e00cc88a42d6c0b4c7429dd"} Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.508482 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w8ln6"] Jan 30 06:35:42 crc kubenswrapper[4931]: E0130 06:35:42.511299 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="dnsmasq-dns" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.511323 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="dnsmasq-dns" Jan 30 06:35:42 crc kubenswrapper[4931]: E0130 06:35:42.511346 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="init" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.511355 
4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="init" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.511534 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5868fc-e1b4-4f28-ba8d-6b0d9fad2db1" containerName="dnsmasq-dns" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.512040 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.526118 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8ln6"] Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.619072 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ee04-account-create-update-5mxt8"] Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.620301 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.622135 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.627345 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ee04-account-create-update-5mxt8"] Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.644307 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/259088b5-f22c-4773-a526-5ce0d618a3c9-operator-scripts\") pod \"neutron-db-create-w8ln6\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.644390 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v7c9\" (UniqueName: \"kubernetes.io/projected/259088b5-f22c-4773-a526-5ce0d618a3c9-kube-api-access-6v7c9\") pod \"neutron-db-create-w8ln6\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.746255 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/259088b5-f22c-4773-a526-5ce0d618a3c9-operator-scripts\") pod \"neutron-db-create-w8ln6\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.746383 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7c9\" (UniqueName: \"kubernetes.io/projected/259088b5-f22c-4773-a526-5ce0d618a3c9-kube-api-access-6v7c9\") pod \"neutron-db-create-w8ln6\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.746449 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnm2q\" (UniqueName: \"kubernetes.io/projected/da3ee3e2-1067-4d91-8780-4ee1442ddccd-kube-api-access-wnm2q\") pod \"neutron-ee04-account-create-update-5mxt8\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.746712 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da3ee3e2-1067-4d91-8780-4ee1442ddccd-operator-scripts\") pod \"neutron-ee04-account-create-update-5mxt8\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.747542 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/259088b5-f22c-4773-a526-5ce0d618a3c9-operator-scripts\") pod \"neutron-db-create-w8ln6\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.778240 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7c9\" (UniqueName: \"kubernetes.io/projected/259088b5-f22c-4773-a526-5ce0d618a3c9-kube-api-access-6v7c9\") pod \"neutron-db-create-w8ln6\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.827986 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.848901 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnm2q\" (UniqueName: \"kubernetes.io/projected/da3ee3e2-1067-4d91-8780-4ee1442ddccd-kube-api-access-wnm2q\") pod \"neutron-ee04-account-create-update-5mxt8\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.849017 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da3ee3e2-1067-4d91-8780-4ee1442ddccd-operator-scripts\") pod \"neutron-ee04-account-create-update-5mxt8\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.850855 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da3ee3e2-1067-4d91-8780-4ee1442ddccd-operator-scripts\") pod \"neutron-ee04-account-create-update-5mxt8\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.882882 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnm2q\" (UniqueName: \"kubernetes.io/projected/da3ee3e2-1067-4d91-8780-4ee1442ddccd-kube-api-access-wnm2q\") pod \"neutron-ee04-account-create-update-5mxt8\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:42 crc kubenswrapper[4931]: I0130 06:35:42.945371 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:43 crc kubenswrapper[4931]: I0130 06:35:43.322392 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ee04-account-create-update-5mxt8"] Jan 30 06:35:43 crc kubenswrapper[4931]: W0130 06:35:43.324099 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda3ee3e2_1067_4d91_8780_4ee1442ddccd.slice/crio-d2fc1bc0b64af85fdcaae625f7a2cd72818f6a6c69c4e57aef2ea89ed03c3e40 WatchSource:0}: Error finding container d2fc1bc0b64af85fdcaae625f7a2cd72818f6a6c69c4e57aef2ea89ed03c3e40: Status 404 returned error can't find the container with id d2fc1bc0b64af85fdcaae625f7a2cd72818f6a6c69c4e57aef2ea89ed03c3e40 Jan 30 06:35:43 crc kubenswrapper[4931]: I0130 06:35:43.389713 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w8ln6"] Jan 30 06:35:43 crc kubenswrapper[4931]: W0130 06:35:43.398534 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod259088b5_f22c_4773_a526_5ce0d618a3c9.slice/crio-f98acd6750b37318271f0a4ec84a4e883f736cce807edcbf3eac1ea04f125692 WatchSource:0}: Error finding container f98acd6750b37318271f0a4ec84a4e883f736cce807edcbf3eac1ea04f125692: Status 404 returned error can't find the container with id f98acd6750b37318271f0a4ec84a4e883f736cce807edcbf3eac1ea04f125692 Jan 30 06:35:44 crc kubenswrapper[4931]: I0130 06:35:44.046493 4931 generic.go:334] "Generic (PLEG): container finished" podID="259088b5-f22c-4773-a526-5ce0d618a3c9" containerID="8641f9d89c670b316ae569c652c473fa47969340118c8804760552f9529867f0" exitCode=0 Jan 30 06:35:44 crc kubenswrapper[4931]: I0130 06:35:44.046592 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8ln6" event={"ID":"259088b5-f22c-4773-a526-5ce0d618a3c9","Type":"ContainerDied","Data":"8641f9d89c670b316ae569c652c473fa47969340118c8804760552f9529867f0"} Jan 30 06:35:44 crc kubenswrapper[4931]: I0130 06:35:44.046928 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8ln6" event={"ID":"259088b5-f22c-4773-a526-5ce0d618a3c9","Type":"ContainerStarted","Data":"f98acd6750b37318271f0a4ec84a4e883f736cce807edcbf3eac1ea04f125692"} Jan 30 06:35:44 crc kubenswrapper[4931]: I0130 06:35:44.051401 4931 generic.go:334] "Generic (PLEG): container finished" podID="da3ee3e2-1067-4d91-8780-4ee1442ddccd" containerID="2a367b7f63781dff8719e328044d9f7bfe39229339b2c9fd8828dc6b757b0a29" exitCode=0 Jan 30 06:35:44 crc kubenswrapper[4931]: I0130 06:35:44.051472 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ee04-account-create-update-5mxt8" event={"ID":"da3ee3e2-1067-4d91-8780-4ee1442ddccd","Type":"ContainerDied","Data":"2a367b7f63781dff8719e328044d9f7bfe39229339b2c9fd8828dc6b757b0a29"} Jan 30 06:35:44 crc kubenswrapper[4931]: I0130 06:35:44.051506 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ee04-account-create-update-5mxt8" event={"ID":"da3ee3e2-1067-4d91-8780-4ee1442ddccd","Type":"ContainerStarted","Data":"d2fc1bc0b64af85fdcaae625f7a2cd72818f6a6c69c4e57aef2ea89ed03c3e40"} Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.542881 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.559065 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.611871 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da3ee3e2-1067-4d91-8780-4ee1442ddccd-operator-scripts\") pod \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.611957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnm2q\" (UniqueName: \"kubernetes.io/projected/da3ee3e2-1067-4d91-8780-4ee1442ddccd-kube-api-access-wnm2q\") pod \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\" (UID: \"da3ee3e2-1067-4d91-8780-4ee1442ddccd\") " Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.612012 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v7c9\" (UniqueName: \"kubernetes.io/projected/259088b5-f22c-4773-a526-5ce0d618a3c9-kube-api-access-6v7c9\") pod \"259088b5-f22c-4773-a526-5ce0d618a3c9\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.612140 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/259088b5-f22c-4773-a526-5ce0d618a3c9-operator-scripts\") pod \"259088b5-f22c-4773-a526-5ce0d618a3c9\" (UID: \"259088b5-f22c-4773-a526-5ce0d618a3c9\") " Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.613050 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259088b5-f22c-4773-a526-5ce0d618a3c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "259088b5-f22c-4773-a526-5ce0d618a3c9" (UID: "259088b5-f22c-4773-a526-5ce0d618a3c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.613497 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da3ee3e2-1067-4d91-8780-4ee1442ddccd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da3ee3e2-1067-4d91-8780-4ee1442ddccd" (UID: "da3ee3e2-1067-4d91-8780-4ee1442ddccd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.618135 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da3ee3e2-1067-4d91-8780-4ee1442ddccd-kube-api-access-wnm2q" (OuterVolumeSpecName: "kube-api-access-wnm2q") pod "da3ee3e2-1067-4d91-8780-4ee1442ddccd" (UID: "da3ee3e2-1067-4d91-8780-4ee1442ddccd"). InnerVolumeSpecName "kube-api-access-wnm2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.618526 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259088b5-f22c-4773-a526-5ce0d618a3c9-kube-api-access-6v7c9" (OuterVolumeSpecName: "kube-api-access-6v7c9") pod "259088b5-f22c-4773-a526-5ce0d618a3c9" (UID: "259088b5-f22c-4773-a526-5ce0d618a3c9"). InnerVolumeSpecName "kube-api-access-6v7c9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.714844 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da3ee3e2-1067-4d91-8780-4ee1442ddccd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.715202 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnm2q\" (UniqueName: \"kubernetes.io/projected/da3ee3e2-1067-4d91-8780-4ee1442ddccd-kube-api-access-wnm2q\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.715224 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v7c9\" (UniqueName: \"kubernetes.io/projected/259088b5-f22c-4773-a526-5ce0d618a3c9-kube-api-access-6v7c9\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:45 crc kubenswrapper[4931]: I0130 06:35:45.715243 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/259088b5-f22c-4773-a526-5ce0d618a3c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:46 crc kubenswrapper[4931]: I0130 06:35:46.076213 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w8ln6" Jan 30 06:35:46 crc kubenswrapper[4931]: I0130 06:35:46.076712 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w8ln6" event={"ID":"259088b5-f22c-4773-a526-5ce0d618a3c9","Type":"ContainerDied","Data":"f98acd6750b37318271f0a4ec84a4e883f736cce807edcbf3eac1ea04f125692"} Jan 30 06:35:46 crc kubenswrapper[4931]: I0130 06:35:46.076778 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f98acd6750b37318271f0a4ec84a4e883f736cce807edcbf3eac1ea04f125692" Jan 30 06:35:46 crc kubenswrapper[4931]: I0130 06:35:46.078899 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ee04-account-create-update-5mxt8" event={"ID":"da3ee3e2-1067-4d91-8780-4ee1442ddccd","Type":"ContainerDied","Data":"d2fc1bc0b64af85fdcaae625f7a2cd72818f6a6c69c4e57aef2ea89ed03c3e40"} Jan 30 06:35:46 crc kubenswrapper[4931]: I0130 06:35:46.078936 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2fc1bc0b64af85fdcaae625f7a2cd72818f6a6c69c4e57aef2ea89ed03c3e40" Jan 30 06:35:46 crc kubenswrapper[4931]: I0130 06:35:46.079053 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ee04-account-create-update-5mxt8" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.820081 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vxl4f"] Jan 30 06:35:47 crc kubenswrapper[4931]: E0130 06:35:47.820779 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259088b5-f22c-4773-a526-5ce0d618a3c9" containerName="mariadb-database-create" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.820795 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="259088b5-f22c-4773-a526-5ce0d618a3c9" containerName="mariadb-database-create" Jan 30 06:35:47 crc kubenswrapper[4931]: E0130 06:35:47.820812 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da3ee3e2-1067-4d91-8780-4ee1442ddccd" containerName="mariadb-account-create-update" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.820823 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="da3ee3e2-1067-4d91-8780-4ee1442ddccd" containerName="mariadb-account-create-update" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.820992 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3ee3e2-1067-4d91-8780-4ee1442ddccd" containerName="mariadb-account-create-update" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.821027 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="259088b5-f22c-4773-a526-5ce0d618a3c9" containerName="mariadb-database-create" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.821773 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.823973 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.824461 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-twcp7" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.824604 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.835210 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vxl4f"] Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.857864 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-config\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.857920 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-combined-ca-bundle\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.858128 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dqp\" (UniqueName: \"kubernetes.io/projected/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-kube-api-access-p2dqp\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc 
kubenswrapper[4931]: I0130 06:35:47.959390 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-config\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.959544 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-combined-ca-bundle\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.961159 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dqp\" (UniqueName: \"kubernetes.io/projected/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-kube-api-access-p2dqp\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.967131 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-combined-ca-bundle\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.973222 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-config\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:47 crc kubenswrapper[4931]: I0130 06:35:47.987415 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dqp\" (UniqueName: \"kubernetes.io/projected/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-kube-api-access-p2dqp\") pod \"neutron-db-sync-vxl4f\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:48 crc kubenswrapper[4931]: I0130 06:35:48.157753 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:48 crc kubenswrapper[4931]: I0130 06:35:48.646464 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vxl4f"] Jan 30 06:35:49 crc kubenswrapper[4931]: I0130 06:35:49.117232 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxl4f" event={"ID":"a2c67196-2e21-4ca1-81c6-ae1d0b68d461","Type":"ContainerStarted","Data":"a418c4ea4e534161dc6a2d3882bfda776aacf77311c0dc409345881787f7574b"} Jan 30 06:35:49 crc kubenswrapper[4931]: I0130 06:35:49.117283 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxl4f" event={"ID":"a2c67196-2e21-4ca1-81c6-ae1d0b68d461","Type":"ContainerStarted","Data":"3670fe8bc5a251e2f47fae900729e06863bb53e78c9777474cbdbd5c325116f9"} Jan 30 06:35:49 crc kubenswrapper[4931]: I0130 06:35:49.143108 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vxl4f" podStartSLOduration=2.143070398 podStartE2EDuration="2.143070398s" podCreationTimestamp="2026-01-30 06:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:49.140293058 +0000 UTC m=+5284.510203385" watchObservedRunningTime="2026-01-30 06:35:49.143070398 +0000 UTC m=+5284.512980695" Jan 30 06:35:53 crc kubenswrapper[4931]: I0130 06:35:53.158303 4931 generic.go:334] "Generic (PLEG): container finished" podID="a2c67196-2e21-4ca1-81c6-ae1d0b68d461" containerID="a418c4ea4e534161dc6a2d3882bfda776aacf77311c0dc409345881787f7574b" exitCode=0 Jan 30 06:35:53 crc kubenswrapper[4931]: I0130 06:35:53.158385 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxl4f" event={"ID":"a2c67196-2e21-4ca1-81c6-ae1d0b68d461","Type":"ContainerDied","Data":"a418c4ea4e534161dc6a2d3882bfda776aacf77311c0dc409345881787f7574b"} Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.527021 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.595990 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-combined-ca-bundle\") pod \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.596092 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-config\") pod \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.596147 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2dqp\" (UniqueName: \"kubernetes.io/projected/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-kube-api-access-p2dqp\") pod \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\" (UID: \"a2c67196-2e21-4ca1-81c6-ae1d0b68d461\") " Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.602706 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-kube-api-access-p2dqp" (OuterVolumeSpecName: "kube-api-access-p2dqp") pod "a2c67196-2e21-4ca1-81c6-ae1d0b68d461" (UID: "a2c67196-2e21-4ca1-81c6-ae1d0b68d461"). InnerVolumeSpecName "kube-api-access-p2dqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.622338 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2c67196-2e21-4ca1-81c6-ae1d0b68d461" (UID: "a2c67196-2e21-4ca1-81c6-ae1d0b68d461"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.622855 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-config" (OuterVolumeSpecName: "config") pod "a2c67196-2e21-4ca1-81c6-ae1d0b68d461" (UID: "a2c67196-2e21-4ca1-81c6-ae1d0b68d461"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.697440 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.697480 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:54 crc kubenswrapper[4931]: I0130 06:35:54.697494 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2dqp\" (UniqueName: \"kubernetes.io/projected/a2c67196-2e21-4ca1-81c6-ae1d0b68d461-kube-api-access-p2dqp\") on node \"crc\" DevicePath \"\"" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.183531 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vxl4f" event={"ID":"a2c67196-2e21-4ca1-81c6-ae1d0b68d461","Type":"ContainerDied","Data":"3670fe8bc5a251e2f47fae900729e06863bb53e78c9777474cbdbd5c325116f9"} Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.184020 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3670fe8bc5a251e2f47fae900729e06863bb53e78c9777474cbdbd5c325116f9" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.183708 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vxl4f" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.484707 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d84887cc5-thvpx"] Jan 30 06:35:55 crc kubenswrapper[4931]: E0130 06:35:55.485082 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c67196-2e21-4ca1-81c6-ae1d0b68d461" containerName="neutron-db-sync" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.485097 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c67196-2e21-4ca1-81c6-ae1d0b68d461" containerName="neutron-db-sync" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.485255 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c67196-2e21-4ca1-81c6-ae1d0b68d461" containerName="neutron-db-sync" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.486089 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.511404 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-config\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.511596 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-nb\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.511665 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-dns-svc\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.511712 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7bs\" (UniqueName: \"kubernetes.io/projected/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-kube-api-access-qv7bs\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.511750 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-sb\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.516487 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d84887cc5-thvpx"] Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.529208 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78978fdd5c-pqg87"] Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.530606 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.533090 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-twcp7" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.533696 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.533835 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.537966 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78978fdd5c-pqg87"] Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.614210 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-dns-svc\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.614365 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7bs\" (UniqueName: \"kubernetes.io/projected/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-kube-api-access-qv7bs\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.615186 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-dns-svc\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.615503 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-config\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.615570 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-sb\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.616201 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-sb\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.616259 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-config\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.616826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-config\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.616890 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-nb\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.617487 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-nb\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.617558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-combined-ca-bundle\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.617662 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkv6z\" (UniqueName: \"kubernetes.io/projected/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-kube-api-access-gkv6z\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.617710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-httpd-config\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.632330 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7bs\" (UniqueName: \"kubernetes.io/projected/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-kube-api-access-qv7bs\") pod \"dnsmasq-dns-d84887cc5-thvpx\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.719518 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-config\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.720011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-combined-ca-bundle\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.720056 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkv6z\" (UniqueName: 
\"kubernetes.io/projected/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-kube-api-access-gkv6z\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.720074 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-httpd-config\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.722778 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-config\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.726119 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-httpd-config\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.726220 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-combined-ca-bundle\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.746808 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkv6z\" (UniqueName: \"kubernetes.io/projected/ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5-kube-api-access-gkv6z\") pod \"neutron-78978fdd5c-pqg87\" (UID: \"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5\") " pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.824733 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:35:55 crc kubenswrapper[4931]: I0130 06:35:55.847325 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:56 crc kubenswrapper[4931]: I0130 06:35:56.285313 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d84887cc5-thvpx"] Jan 30 06:35:56 crc kubenswrapper[4931]: I0130 06:35:56.477323 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78978fdd5c-pqg87"] Jan 30 06:35:56 crc kubenswrapper[4931]: W0130 06:35:56.482510 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef26e9fb_2e6d_4582_a140_1ebd8eebc9e5.slice/crio-4f58ba9c78a9aa618bd3163dd6eea495c1e01667778064d935f8061a0cbc231c WatchSource:0}: Error finding container 4f58ba9c78a9aa618bd3163dd6eea495c1e01667778064d935f8061a0cbc231c: Status 404 returned error can't find the container with id 4f58ba9c78a9aa618bd3163dd6eea495c1e01667778064d935f8061a0cbc231c Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.201175 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78978fdd5c-pqg87" event={"ID":"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5","Type":"ContainerStarted","Data":"5d3f791e0fbe47127d8892cc054af5a0452bbcb2d8edd1baaab73a68e999b293"} Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.201936 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78978fdd5c-pqg87" event={"ID":"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5","Type":"ContainerStarted","Data":"0051ae7f7b4dee62dc46439aad8d97f329741b7b6d43d866d13ee09e484ea0b2"} Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.201980 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78978fdd5c-pqg87" event={"ID":"ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5","Type":"ContainerStarted","Data":"4f58ba9c78a9aa618bd3163dd6eea495c1e01667778064d935f8061a0cbc231c"} Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.201999 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78978fdd5c-pqg87" Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.217993 4931 generic.go:334] "Generic (PLEG): container finished" podID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerID="753bd88c07189bffc8d69c9bf6a34e6d86af3f56c735e6c62d25cd5e5e4da562" exitCode=0 Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.218220 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" event={"ID":"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5","Type":"ContainerDied","Data":"753bd88c07189bffc8d69c9bf6a34e6d86af3f56c735e6c62d25cd5e5e4da562"} Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.218300 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" event={"ID":"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5","Type":"ContainerStarted","Data":"48bcc562af333fbf9303da09ed90c070b458fec2e34b83616d91ef56cccabb57"} Jan 30 06:35:57 crc kubenswrapper[4931]: I0130 06:35:57.248635 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78978fdd5c-pqg87" podStartSLOduration=2.248608211 podStartE2EDuration="2.248608211s" podCreationTimestamp="2026-01-30 06:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:57.225680583 +0000 UTC m=+5292.595590850" watchObservedRunningTime="2026-01-30 06:35:57.248608211 +0000 UTC m=+5292.618518468" Jan 30 06:35:58 crc kubenswrapper[4931]: I0130 06:35:58.238834 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" event={"ID":"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5","Type":"ContainerStarted","Data":"9a53ddc494ac40f2cd235561690db17e6fa36bf1d84c504bb1475d76a3081caf"}
Jan 30 06:35:58 crc kubenswrapper[4931]: I0130 06:35:58.270918 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" podStartSLOduration=3.270900391 podStartE2EDuration="3.270900391s" podCreationTimestamp="2026-01-30 06:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:35:58.270223101 +0000 UTC m=+5293.640133388" watchObservedRunningTime="2026-01-30 06:35:58.270900391 +0000 UTC m=+5293.640810648"
Jan 30 06:35:59 crc kubenswrapper[4931]: I0130 06:35:59.249203 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d84887cc5-thvpx"
Jan 30 06:36:05 crc kubenswrapper[4931]: I0130 06:36:05.825585 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d84887cc5-thvpx"
Jan 30 06:36:05 crc kubenswrapper[4931]: I0130 06:36:05.890571 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-669bb76d6c-ld8p5"]
Jan 30 06:36:05 crc kubenswrapper[4931]: I0130 06:36:05.890813 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerName="dnsmasq-dns" containerID="cri-o://ae5834eb7aa94f3cb815f35e315bb6ed049e191afc33534b32776fd0debd1c4a" gracePeriod=10
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.347278 4931 generic.go:334] "Generic (PLEG): container finished" podID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerID="ae5834eb7aa94f3cb815f35e315bb6ed049e191afc33534b32776fd0debd1c4a" exitCode=0
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.347362 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" event={"ID":"66eeada2-dfe0-4ebc-af62-17af9f1ce15e","Type":"ContainerDied","Data":"ae5834eb7aa94f3cb815f35e315bb6ed049e191afc33534b32776fd0debd1c4a"}
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.406257 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5"
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.519141 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-dns-svc\") pod \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") "
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.519206 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-nb\") pod \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") "
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.519235 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-sb\") pod \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") "
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.519329 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-config\") pod \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") "
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.519354 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t4f9\" (UniqueName: \"kubernetes.io/projected/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-kube-api-access-4t4f9\") pod \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\" (UID: \"66eeada2-dfe0-4ebc-af62-17af9f1ce15e\") "
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.526671 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-kube-api-access-4t4f9" (OuterVolumeSpecName: "kube-api-access-4t4f9") pod "66eeada2-dfe0-4ebc-af62-17af9f1ce15e" (UID: "66eeada2-dfe0-4ebc-af62-17af9f1ce15e"). InnerVolumeSpecName "kube-api-access-4t4f9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.577787 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66eeada2-dfe0-4ebc-af62-17af9f1ce15e" (UID: "66eeada2-dfe0-4ebc-af62-17af9f1ce15e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.578127 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66eeada2-dfe0-4ebc-af62-17af9f1ce15e" (UID: "66eeada2-dfe0-4ebc-af62-17af9f1ce15e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.578807 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-config" (OuterVolumeSpecName: "config") pod "66eeada2-dfe0-4ebc-af62-17af9f1ce15e" (UID: "66eeada2-dfe0-4ebc-af62-17af9f1ce15e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.578997 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66eeada2-dfe0-4ebc-af62-17af9f1ce15e" (UID: "66eeada2-dfe0-4ebc-af62-17af9f1ce15e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.621019 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-config\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.621052 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t4f9\" (UniqueName: \"kubernetes.io/projected/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-kube-api-access-4t4f9\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.621065 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.621075 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:06 crc kubenswrapper[4931]: I0130 06:36:06.621085 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66eeada2-dfe0-4ebc-af62-17af9f1ce15e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.357534 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5" event={"ID":"66eeada2-dfe0-4ebc-af62-17af9f1ce15e","Type":"ContainerDied","Data":"509c57c8ee870d1521b8c78081fe670bec4a3e4b66684ebdaba1d4543435f196"}
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.357627 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-669bb76d6c-ld8p5"
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.358079 4931 scope.go:117] "RemoveContainer" containerID="ae5834eb7aa94f3cb815f35e315bb6ed049e191afc33534b32776fd0debd1c4a"
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.379883 4931 scope.go:117] "RemoveContainer" containerID="fac0f8070b365b5149eb56040af36ca71ef68ff65a5f0aef7d169c86e39479df"
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.403222 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-669bb76d6c-ld8p5"]
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.409160 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-669bb76d6c-ld8p5"]
Jan 30 06:36:07 crc kubenswrapper[4931]: I0130 06:36:07.433134 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" path="/var/lib/kubelet/pods/66eeada2-dfe0-4ebc-af62-17af9f1ce15e/volumes"
Jan 30 06:36:25 crc kubenswrapper[4931]: I0130 06:36:25.863005 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78978fdd5c-pqg87"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.121902 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fcgh6"]
Jan 30 06:36:33 crc kubenswrapper[4931]: E0130 06:36:33.122868 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerName="init"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.122888 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerName="init"
Jan 30 06:36:33 crc kubenswrapper[4931]: E0130 06:36:33.122909 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerName="dnsmasq-dns"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.122916 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerName="dnsmasq-dns"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.123108 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eeada2-dfe0-4ebc-af62-17af9f1ce15e" containerName="dnsmasq-dns"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.123741 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.130623 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fcgh6"]
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.227999 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8be9-account-create-update-4qptt"]
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.229339 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.231044 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.245332 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8be9-account-create-update-4qptt"]
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.247337 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5a82c2-728c-40a6-83b0-37ba70d84931-operator-scripts\") pod \"glance-db-create-fcgh6\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") " pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.247552 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjr9\" (UniqueName: \"kubernetes.io/projected/fe5a82c2-728c-40a6-83b0-37ba70d84931-kube-api-access-wrjr9\") pod \"glance-db-create-fcgh6\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") " pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.349495 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ttz\" (UniqueName: \"kubernetes.io/projected/f6cc38ea-1412-4e17-9c74-779b7c6d701c-kube-api-access-76ttz\") pod \"glance-8be9-account-create-update-4qptt\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") " pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.349567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5a82c2-728c-40a6-83b0-37ba70d84931-operator-scripts\") pod \"glance-db-create-fcgh6\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") " pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.349732 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cc38ea-1412-4e17-9c74-779b7c6d701c-operator-scripts\") pod \"glance-8be9-account-create-update-4qptt\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") " pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.349882 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjr9\" (UniqueName: \"kubernetes.io/projected/fe5a82c2-728c-40a6-83b0-37ba70d84931-kube-api-access-wrjr9\") pod \"glance-db-create-fcgh6\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") " pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.350702 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5a82c2-728c-40a6-83b0-37ba70d84931-operator-scripts\") pod \"glance-db-create-fcgh6\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") " pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.373297 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjr9\" (UniqueName: \"kubernetes.io/projected/fe5a82c2-728c-40a6-83b0-37ba70d84931-kube-api-access-wrjr9\") pod \"glance-db-create-fcgh6\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") " pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.451684 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ttz\" (UniqueName: \"kubernetes.io/projected/f6cc38ea-1412-4e17-9c74-779b7c6d701c-kube-api-access-76ttz\") pod \"glance-8be9-account-create-update-4qptt\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") " pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.451772 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cc38ea-1412-4e17-9c74-779b7c6d701c-operator-scripts\") pod \"glance-8be9-account-create-update-4qptt\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") " pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.452652 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cc38ea-1412-4e17-9c74-779b7c6d701c-operator-scripts\") pod \"glance-8be9-account-create-update-4qptt\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") " pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.476791 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ttz\" (UniqueName: \"kubernetes.io/projected/f6cc38ea-1412-4e17-9c74-779b7c6d701c-kube-api-access-76ttz\") pod \"glance-8be9-account-create-update-4qptt\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") " pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.490708 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.543662 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:33 crc kubenswrapper[4931]: I0130 06:36:33.979658 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fcgh6"]
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.046598 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8be9-account-create-update-4qptt"]
Jan 30 06:36:34 crc kubenswrapper[4931]: W0130 06:36:34.046846 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6cc38ea_1412_4e17_9c74_779b7c6d701c.slice/crio-8775712b1867d046021ff197ce3cb48cf5322c9e656c2529b4c751dd69d648f5 WatchSource:0}: Error finding container 8775712b1867d046021ff197ce3cb48cf5322c9e656c2529b4c751dd69d648f5: Status 404 returned error can't find the container with id 8775712b1867d046021ff197ce3cb48cf5322c9e656c2529b4c751dd69d648f5
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.623823 4931 generic.go:334] "Generic (PLEG): container finished" podID="f6cc38ea-1412-4e17-9c74-779b7c6d701c" containerID="b12392121e0278ef6aaee0ef2cb91f20ce791df236403c3611d10649bcb909d3" exitCode=0
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.623968 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8be9-account-create-update-4qptt" event={"ID":"f6cc38ea-1412-4e17-9c74-779b7c6d701c","Type":"ContainerDied","Data":"b12392121e0278ef6aaee0ef2cb91f20ce791df236403c3611d10649bcb909d3"}
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.624465 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8be9-account-create-update-4qptt" event={"ID":"f6cc38ea-1412-4e17-9c74-779b7c6d701c","Type":"ContainerStarted","Data":"8775712b1867d046021ff197ce3cb48cf5322c9e656c2529b4c751dd69d648f5"}
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.627166 4931 generic.go:334] "Generic (PLEG): container finished" podID="fe5a82c2-728c-40a6-83b0-37ba70d84931" containerID="4711e717af206225417ee23e6a5a6867fd0fca04b0c1bb798437c5d765e9f38b" exitCode=0
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.627228 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fcgh6" event={"ID":"fe5a82c2-728c-40a6-83b0-37ba70d84931","Type":"ContainerDied","Data":"4711e717af206225417ee23e6a5a6867fd0fca04b0c1bb798437c5d765e9f38b"}
Jan 30 06:36:34 crc kubenswrapper[4931]: I0130 06:36:34.627260 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fcgh6" event={"ID":"fe5a82c2-728c-40a6-83b0-37ba70d84931","Type":"ContainerStarted","Data":"4b0033d6fb69375936e0004de30bd4a3ca48c1df256811c61ab55f5388f86d4c"}
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.115310 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.126906 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.205150 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76ttz\" (UniqueName: \"kubernetes.io/projected/f6cc38ea-1412-4e17-9c74-779b7c6d701c-kube-api-access-76ttz\") pod \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") "
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.205267 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrjr9\" (UniqueName: \"kubernetes.io/projected/fe5a82c2-728c-40a6-83b0-37ba70d84931-kube-api-access-wrjr9\") pod \"fe5a82c2-728c-40a6-83b0-37ba70d84931\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") "
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.205324 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cc38ea-1412-4e17-9c74-779b7c6d701c-operator-scripts\") pod \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\" (UID: \"f6cc38ea-1412-4e17-9c74-779b7c6d701c\") "
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.205451 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5a82c2-728c-40a6-83b0-37ba70d84931-operator-scripts\") pod \"fe5a82c2-728c-40a6-83b0-37ba70d84931\" (UID: \"fe5a82c2-728c-40a6-83b0-37ba70d84931\") "
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.206362 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe5a82c2-728c-40a6-83b0-37ba70d84931-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe5a82c2-728c-40a6-83b0-37ba70d84931" (UID: "fe5a82c2-728c-40a6-83b0-37ba70d84931"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.206915 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cc38ea-1412-4e17-9c74-779b7c6d701c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6cc38ea-1412-4e17-9c74-779b7c6d701c" (UID: "f6cc38ea-1412-4e17-9c74-779b7c6d701c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.209979 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe5a82c2-728c-40a6-83b0-37ba70d84931-kube-api-access-wrjr9" (OuterVolumeSpecName: "kube-api-access-wrjr9") pod "fe5a82c2-728c-40a6-83b0-37ba70d84931" (UID: "fe5a82c2-728c-40a6-83b0-37ba70d84931"). InnerVolumeSpecName "kube-api-access-wrjr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.210518 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cc38ea-1412-4e17-9c74-779b7c6d701c-kube-api-access-76ttz" (OuterVolumeSpecName: "kube-api-access-76ttz") pod "f6cc38ea-1412-4e17-9c74-779b7c6d701c" (UID: "f6cc38ea-1412-4e17-9c74-779b7c6d701c"). InnerVolumeSpecName "kube-api-access-76ttz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.307039 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrjr9\" (UniqueName: \"kubernetes.io/projected/fe5a82c2-728c-40a6-83b0-37ba70d84931-kube-api-access-wrjr9\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.307090 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6cc38ea-1412-4e17-9c74-779b7c6d701c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.307110 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe5a82c2-728c-40a6-83b0-37ba70d84931-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.307129 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76ttz\" (UniqueName: \"kubernetes.io/projected/f6cc38ea-1412-4e17-9c74-779b7c6d701c-kube-api-access-76ttz\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.652068 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8be9-account-create-update-4qptt" event={"ID":"f6cc38ea-1412-4e17-9c74-779b7c6d701c","Type":"ContainerDied","Data":"8775712b1867d046021ff197ce3cb48cf5322c9e656c2529b4c751dd69d648f5"}
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.652485 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8775712b1867d046021ff197ce3cb48cf5322c9e656c2529b4c751dd69d648f5"
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.652197 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8be9-account-create-update-4qptt"
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.654755 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fcgh6" event={"ID":"fe5a82c2-728c-40a6-83b0-37ba70d84931","Type":"ContainerDied","Data":"4b0033d6fb69375936e0004de30bd4a3ca48c1df256811c61ab55f5388f86d4c"}
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.654825 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b0033d6fb69375936e0004de30bd4a3ca48c1df256811c61ab55f5388f86d4c"
Jan 30 06:36:36 crc kubenswrapper[4931]: I0130 06:36:36.654844 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fcgh6"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.299181 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-529p5"]
Jan 30 06:36:38 crc kubenswrapper[4931]: E0130 06:36:38.299641 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe5a82c2-728c-40a6-83b0-37ba70d84931" containerName="mariadb-database-create"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.299656 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe5a82c2-728c-40a6-83b0-37ba70d84931" containerName="mariadb-database-create"
Jan 30 06:36:38 crc kubenswrapper[4931]: E0130 06:36:38.299680 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cc38ea-1412-4e17-9c74-779b7c6d701c" containerName="mariadb-account-create-update"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.299688 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cc38ea-1412-4e17-9c74-779b7c6d701c" containerName="mariadb-account-create-update"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.299905 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cc38ea-1412-4e17-9c74-779b7c6d701c" containerName="mariadb-account-create-update"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.299933 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe5a82c2-728c-40a6-83b0-37ba70d84931" containerName="mariadb-database-create"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.300599 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.307772 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8bfcx"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.307796 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.310155 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-529p5"]
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.446006 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfvk6\" (UniqueName: \"kubernetes.io/projected/bff91271-f1e2-4aaf-adec-bc61ce9dedad-kube-api-access-jfvk6\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.446098 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-config-data\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.446166 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-combined-ca-bundle\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.446245 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-db-sync-config-data\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.548090 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfvk6\" (UniqueName: \"kubernetes.io/projected/bff91271-f1e2-4aaf-adec-bc61ce9dedad-kube-api-access-jfvk6\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.548501 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-config-data\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.548547 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-combined-ca-bundle\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.548569 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-db-sync-config-data\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.570178 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-db-sync-config-data\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.570190 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-combined-ca-bundle\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.570193 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-config-data\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.581049 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfvk6\" (UniqueName: \"kubernetes.io/projected/bff91271-f1e2-4aaf-adec-bc61ce9dedad-kube-api-access-jfvk6\") pod \"glance-db-sync-529p5\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") " pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.620979 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:38 crc kubenswrapper[4931]: I0130 06:36:38.941291 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-529p5"]
Jan 30 06:36:39 crc kubenswrapper[4931]: I0130 06:36:39.691076 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-529p5" event={"ID":"bff91271-f1e2-4aaf-adec-bc61ce9dedad","Type":"ContainerStarted","Data":"6f79520471e9a429df8d71872cafba7c48f1385750af4061f0e5ea5c4355f53e"}
Jan 30 06:36:39 crc kubenswrapper[4931]: I0130 06:36:39.691401 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-529p5" event={"ID":"bff91271-f1e2-4aaf-adec-bc61ce9dedad","Type":"ContainerStarted","Data":"2b95e40187ff1d07e7b9f697d565d12c73d63ef422465c2985d21a0747596235"}
Jan 30 06:36:39 crc kubenswrapper[4931]: I0130 06:36:39.712578 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-529p5" podStartSLOduration=1.712560282 podStartE2EDuration="1.712560282s" podCreationTimestamp="2026-01-30 06:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:36:39.706920354 +0000 UTC m=+5335.076830611" watchObservedRunningTime="2026-01-30 06:36:39.712560282 +0000 UTC m=+5335.082470539"
Jan 30 06:36:42 crc kubenswrapper[4931]: I0130 06:36:42.724029 4931 generic.go:334] "Generic (PLEG): container finished" podID="bff91271-f1e2-4aaf-adec-bc61ce9dedad" containerID="6f79520471e9a429df8d71872cafba7c48f1385750af4061f0e5ea5c4355f53e" exitCode=0
Jan 30 06:36:42 crc kubenswrapper[4931]: I0130 06:36:42.724093 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-529p5" event={"ID":"bff91271-f1e2-4aaf-adec-bc61ce9dedad","Type":"ContainerDied","Data":"6f79520471e9a429df8d71872cafba7c48f1385750af4061f0e5ea5c4355f53e"}
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.230044 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.266204 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-combined-ca-bundle\") pod \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") "
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.266317 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-db-sync-config-data\") pod \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") "
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.266378 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-config-data\") pod \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") "
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.266463 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfvk6\" (UniqueName: \"kubernetes.io/projected/bff91271-f1e2-4aaf-adec-bc61ce9dedad-kube-api-access-jfvk6\") pod \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\" (UID: \"bff91271-f1e2-4aaf-adec-bc61ce9dedad\") "
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.273280 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff91271-f1e2-4aaf-adec-bc61ce9dedad-kube-api-access-jfvk6" (OuterVolumeSpecName: "kube-api-access-jfvk6") pod "bff91271-f1e2-4aaf-adec-bc61ce9dedad" (UID: "bff91271-f1e2-4aaf-adec-bc61ce9dedad"). InnerVolumeSpecName "kube-api-access-jfvk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.274047 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bff91271-f1e2-4aaf-adec-bc61ce9dedad" (UID: "bff91271-f1e2-4aaf-adec-bc61ce9dedad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.312833 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bff91271-f1e2-4aaf-adec-bc61ce9dedad" (UID: "bff91271-f1e2-4aaf-adec-bc61ce9dedad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.319021 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-config-data" (OuterVolumeSpecName: "config-data") pod "bff91271-f1e2-4aaf-adec-bc61ce9dedad" (UID: "bff91271-f1e2-4aaf-adec-bc61ce9dedad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.367988 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.368029 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.368042 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfvk6\" (UniqueName: \"kubernetes.io/projected/bff91271-f1e2-4aaf-adec-bc61ce9dedad-kube-api-access-jfvk6\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.368055 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff91271-f1e2-4aaf-adec-bc61ce9dedad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.747450 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-529p5" event={"ID":"bff91271-f1e2-4aaf-adec-bc61ce9dedad","Type":"ContainerDied","Data":"2b95e40187ff1d07e7b9f697d565d12c73d63ef422465c2985d21a0747596235"}
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.747737 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b95e40187ff1d07e7b9f697d565d12c73d63ef422465c2985d21a0747596235"
Jan 30 06:36:44 crc kubenswrapper[4931]: I0130 06:36:44.747600 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-529p5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.068843 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 06:36:45 crc kubenswrapper[4931]: E0130 06:36:45.069454 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff91271-f1e2-4aaf-adec-bc61ce9dedad" containerName="glance-db-sync"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.069578 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff91271-f1e2-4aaf-adec-bc61ce9dedad" containerName="glance-db-sync"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.069878 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff91271-f1e2-4aaf-adec-bc61ce9dedad" containerName="glance-db-sync"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.071047 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.073994 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.074087 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.074643 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.074990 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8bfcx"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.090332 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191260 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191661 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ls9v\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-kube-api-access-8ls9v\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191717 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191768 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191814 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191852 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-ceph\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.191884 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-logs\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.227651 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cf7fddbc7-982b5"]
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.228977 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.247212 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf7fddbc7-982b5"]
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.293257 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ls9v\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-kube-api-access-8ls9v\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.293315 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.293361 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294080 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294134 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294159 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-config\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294177 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-ceph\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294206 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6k6k\" (UniqueName: \"kubernetes.io/projected/e7f7543a-72f1-4937-95d9-8869b77ab81d-kube-api-access-s6k6k\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294223 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-logs\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294260 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294282 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-dns-svc\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294854 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.294864 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-logs\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.298945 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.311010 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.313160 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.313530 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-ceph\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.317661 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ls9v\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-kube-api-access-8ls9v\") pod \"glance-default-external-api-0\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.319090 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.321052 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.326136 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.341864 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396046 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-dns-svc\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396302 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396446 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396533 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjfql\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-kube-api-access-mjfql\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396628 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396701 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396781 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-logs\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396855 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.396925 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397004 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-config\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397070 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-dns-svc\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397074 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397081 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6k6k\" (UniqueName: \"kubernetes.io/projected/e7f7543a-72f1-4937-95d9-8869b77ab81d-kube-api-access-s6k6k\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397191 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-ceph\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397606 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.397859 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-config\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.399571 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.413382 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6k6k\" (UniqueName: \"kubernetes.io/projected/e7f7543a-72f1-4937-95d9-8869b77ab81d-kube-api-access-s6k6k\") pod \"dnsmasq-dns-7cf7fddbc7-982b5\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.498315 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.498615 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499114 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-logs\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499168 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499187 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499220 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499280 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-ceph\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499358 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjfql\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-kube-api-access-mjfql\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.499559 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-logs\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.506063 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.518109 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.521008 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-ceph\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.525569 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjfql\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-kube-api-access-mjfql\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.525936 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.560094 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:45 crc kubenswrapper[4931]: I0130 06:36:45.577051 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.101633 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.132741 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 06:36:46 crc kubenswrapper[4931]: W0130 06:36:46.153102 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ea0350_921c_4016_861b_f61da343aaa6.slice/crio-4a2868e0aa0bfd6a5e0709070490840695d3b5f971138818dd14c1cb2a50ec7b WatchSource:0}: Error finding container 4a2868e0aa0bfd6a5e0709070490840695d3b5f971138818dd14c1cb2a50ec7b: Status 404 returned error can't find the container with id 4a2868e0aa0bfd6a5e0709070490840695d3b5f971138818dd14c1cb2a50ec7b
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.184803 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf7fddbc7-982b5"]
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.237102 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 06:36:46 crc kubenswrapper[4931]: W0130 06:36:46.244322 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaae4a7a6_c1f4_4fd4_a844_f35eef27ffbc.slice/crio-68279e3601f3bb1874e45c1fa4e255a1034408fe199037496d6fdf3b86235715 WatchSource:0}: Error finding container 68279e3601f3bb1874e45c1fa4e255a1034408fe199037496d6fdf3b86235715: Status 404 returned error can't find the container with id 68279e3601f3bb1874e45c1fa4e255a1034408fe199037496d6fdf3b86235715
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.772390 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc","Type":"ContainerStarted","Data":"68279e3601f3bb1874e45c1fa4e255a1034408fe199037496d6fdf3b86235715"}
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.773613 4931 generic.go:334] "Generic (PLEG): container finished" podID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerID="7119d5284982674648c0826e4626a711e51ca2133b6b1310bb2a6ca06e64c6b3" exitCode=0
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.773652 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" event={"ID":"e7f7543a-72f1-4937-95d9-8869b77ab81d","Type":"ContainerDied","Data":"7119d5284982674648c0826e4626a711e51ca2133b6b1310bb2a6ca06e64c6b3"}
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.773668 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" event={"ID":"e7f7543a-72f1-4937-95d9-8869b77ab81d","Type":"ContainerStarted","Data":"424c4112ef530e297d0cff0d4af771a05512506e55897ed954c10a6043fe7171"}
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.778127 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ea0350-921c-4016-861b-f61da343aaa6","Type":"ContainerStarted","Data":"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac"}
Jan 30 06:36:46 crc kubenswrapper[4931]: I0130 06:36:46.778160 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ea0350-921c-4016-861b-f61da343aaa6","Type":"ContainerStarted","Data":"4a2868e0aa0bfd6a5e0709070490840695d3b5f971138818dd14c1cb2a50ec7b"}
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.804996 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" event={"ID":"e7f7543a-72f1-4937-95d9-8869b77ab81d","Type":"ContainerStarted","Data":"64b8f4307220d8bf205a24561416d6dc5c9e47a88b8a78a79bc73aae4b6035c4"}
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.805453 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5"
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.808969 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ea0350-921c-4016-861b-f61da343aaa6","Type":"ContainerStarted","Data":"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce"}
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.809074 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-log" containerID="cri-o://a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac" gracePeriod=30
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.809251 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-httpd" containerID="cri-o://b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce" gracePeriod=30
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.811701 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc","Type":"ContainerStarted","Data":"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a"}
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.811722 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc","Type":"ContainerStarted","Data":"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac"}
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.828134 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" podStartSLOduration=2.8281182620000003 podStartE2EDuration="2.828118262s" podCreationTimestamp="2026-01-30 06:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:36:47.822708611 +0000 UTC m=+5343.192618908" watchObservedRunningTime="2026-01-30 06:36:47.828118262 +0000 UTC m=+5343.198028519"
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.849157 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.84913923 podStartE2EDuration="2.84913923s" podCreationTimestamp="2026-01-30 06:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:36:47.845465018 +0000 UTC m=+5343.215375275" watchObservedRunningTime="2026-01-30 06:36:47.84913923 +0000 UTC m=+5343.219049487"
Jan 30 06:36:47 crc kubenswrapper[4931]: I0130 06:36:47.876069 4931 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.876054773 podStartE2EDuration="2.876054773s" podCreationTimestamp="2026-01-30 06:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:36:47.873324167 +0000 UTC m=+5343.243234424" watchObservedRunningTime="2026-01-30 06:36:47.876054773 +0000 UTC m=+5343.245965030" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.197980 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.428292 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.553883 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-combined-ca-bundle\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.553957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-httpd-run\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.554067 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-config-data\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.554115 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-scripts\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.554134 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ls9v\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-kube-api-access-8ls9v\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.554154 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-ceph\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.554205 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-logs\") pod \"d5ea0350-921c-4016-861b-f61da343aaa6\" (UID: \"d5ea0350-921c-4016-861b-f61da343aaa6\") " Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.554960 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-logs" (OuterVolumeSpecName: "logs") pod 
"d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.557616 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.560235 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-kube-api-access-8ls9v" (OuterVolumeSpecName: "kube-api-access-8ls9v") pod "d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "kube-api-access-8ls9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.561700 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-scripts" (OuterVolumeSpecName: "scripts") pod "d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.562654 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-ceph" (OuterVolumeSpecName: "ceph") pod "d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.595288 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.622587 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-config-data" (OuterVolumeSpecName: "config-data") pod "d5ea0350-921c-4016-861b-f61da343aaa6" (UID: "d5ea0350-921c-4016-861b-f61da343aaa6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656085 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656114 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656125 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ls9v\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-kube-api-access-8ls9v\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656134 4931 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d5ea0350-921c-4016-861b-f61da343aaa6-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656142 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656149 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ea0350-921c-4016-861b-f61da343aaa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.656157 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ea0350-921c-4016-861b-f61da343aaa6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.823071 4931 generic.go:334] "Generic (PLEG): container finished" podID="d5ea0350-921c-4016-861b-f61da343aaa6" containerID="b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce" exitCode=0 Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.823107 4931 generic.go:334] "Generic (PLEG): container finished" podID="d5ea0350-921c-4016-861b-f61da343aaa6" containerID="a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac" exitCode=143 Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.824023 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.825551 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ea0350-921c-4016-861b-f61da343aaa6","Type":"ContainerDied","Data":"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce"} Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.825593 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ea0350-921c-4016-861b-f61da343aaa6","Type":"ContainerDied","Data":"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac"} Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.825612 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5ea0350-921c-4016-861b-f61da343aaa6","Type":"ContainerDied","Data":"4a2868e0aa0bfd6a5e0709070490840695d3b5f971138818dd14c1cb2a50ec7b"} Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.825631 4931 scope.go:117] "RemoveContainer" containerID="b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.861991 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.865872 4931 scope.go:117] "RemoveContainer" containerID="a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.870823 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.890286 4931 scope.go:117] "RemoveContainer" containerID="b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce" Jan 30 06:36:48 crc kubenswrapper[4931]: E0130 06:36:48.890921 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce\": container with ID starting with b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce not found: ID does not exist" containerID="b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.890980 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce"} err="failed to get container status \"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce\": rpc error: code = NotFound desc = could not find container \"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce\": container with ID starting with b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce not found: ID does not exist" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.891007 4931 scope.go:117] "RemoveContainer" containerID="a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.891143 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:36:48 crc kubenswrapper[4931]: E0130 06:36:48.891528 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac\": container with ID 
starting with a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac not found: ID does not exist" containerID="a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.891569 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac"} err="failed to get container status \"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac\": rpc error: code = NotFound desc = could not find container \"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac\": container with ID starting with a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac not found: ID does not exist" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.891608 4931 scope.go:117] "RemoveContainer" containerID="b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce" Jan 30 06:36:48 crc kubenswrapper[4931]: E0130 06:36:48.891547 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-log" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.891662 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-log" Jan 30 06:36:48 crc kubenswrapper[4931]: E0130 06:36:48.891714 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-httpd" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.891722 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-httpd" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.892262 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-log" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.892307 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" containerName="glance-httpd" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.892929 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce"} err="failed to get container status \"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce\": rpc error: code = NotFound desc = could not find container \"b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce\": container with ID starting with b115f2b1a1dc0c68c9d754fe7e52de76fd5115ea1dd25ac48c3263f542fd97ce not found: ID does not exist" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.892970 4931 scope.go:117] "RemoveContainer" containerID="a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.893595 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.894527 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac"} err="failed to get container status \"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac\": rpc error: code = NotFound desc = could not find container \"a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac\": container with ID starting with a9ae1b83e11509d273525802f75cf08999ac0b5f8cf14f23e66ab7bd4a18dcac not found: ID does not exist" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.896639 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 06:36:48 crc kubenswrapper[4931]: I0130 06:36:48.909018 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.062857 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.063003 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.063145 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.063214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.063272 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8j2s\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-kube-api-access-h8j2s\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.063447 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-logs\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.063517 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-ceph\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165207 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165271 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8j2s\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-kube-api-access-h8j2s\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165324 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-logs\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165355 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-ceph\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165387 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165468 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165502 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.165888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-logs\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.166068 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.168492 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-ceph\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.169304 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.169858 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.170343 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.184861 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8j2s\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-kube-api-access-h8j2s\") pod \"glance-default-external-api-0\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.213952 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.436614 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ea0350-921c-4016-861b-f61da343aaa6" path="/var/lib/kubelet/pods/d5ea0350-921c-4016-861b-f61da343aaa6/volumes" Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.604947 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:36:49 crc kubenswrapper[4931]: W0130 06:36:49.607737 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ab7585b_916e_4a6a_8aa8_da769aaa437e.slice/crio-266032ac228e0593b292c8f7becc87b38799dd433a3129a19387d1cf6ec27145 WatchSource:0}: Error finding container 266032ac228e0593b292c8f7becc87b38799dd433a3129a19387d1cf6ec27145: Status 404 returned error can't find the container with id 266032ac228e0593b292c8f7becc87b38799dd433a3129a19387d1cf6ec27145 Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.836964 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ab7585b-916e-4a6a-8aa8-da769aaa437e","Type":"ContainerStarted","Data":"266032ac228e0593b292c8f7becc87b38799dd433a3129a19387d1cf6ec27145"} Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.839900 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-log" containerID="cri-o://cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac" gracePeriod=30 Jan 30 06:36:49 crc kubenswrapper[4931]: I0130 06:36:49.840014 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-httpd" containerID="cri-o://467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a" gracePeriod=30 Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.378781 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509095 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-httpd-run\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509201 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-combined-ca-bundle\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509332 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-scripts\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509357 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-config-data\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509381 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-logs\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509412 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-ceph\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.509465 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjfql\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-kube-api-access-mjfql\") pod \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\" (UID: \"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc\") " Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.515826 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-logs" (OuterVolumeSpecName: "logs") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.520940 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-kube-api-access-mjfql" (OuterVolumeSpecName: "kube-api-access-mjfql") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "kube-api-access-mjfql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.531263 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-scripts" (OuterVolumeSpecName: "scripts") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.535719 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.537619 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-ceph" (OuterVolumeSpecName: "ceph") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.613023 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.613048 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.613056 4931 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.613064 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjfql\" (UniqueName: \"kubernetes.io/projected/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-kube-api-access-mjfql\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.613075 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.631793 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.648588 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-config-data" (OuterVolumeSpecName: "config-data") pod "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" (UID: "aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.715850 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.715886 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.848516 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ab7585b-916e-4a6a-8aa8-da769aaa437e","Type":"ContainerStarted","Data":"9f0644fcac396be5d0abd6c6b2b21170c61642dc6f5db73f638093f772151226"} Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.848558 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ab7585b-916e-4a6a-8aa8-da769aaa437e","Type":"ContainerStarted","Data":"d05f5e569f00e57864985b93b02a9cff6181003207de011c326a946bc0f2b2f3"} Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.850934 4931 generic.go:334] "Generic (PLEG): container finished" podID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerID="467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a" exitCode=0 Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.850971 4931 generic.go:334] "Generic (PLEG): container finished" podID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerID="cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac" exitCode=143 Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.850993 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc","Type":"ContainerDied","Data":"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a"} Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.851020 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc","Type":"ContainerDied","Data":"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac"} Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.851032 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc","Type":"ContainerDied","Data":"68279e3601f3bb1874e45c1fa4e255a1034408fe199037496d6fdf3b86235715"} Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.851046 4931 scope.go:117] "RemoveContainer" containerID="467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.851145 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.882640 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.882623085 podStartE2EDuration="2.882623085s" podCreationTimestamp="2026-01-30 06:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:36:50.878014906 +0000 UTC m=+5346.247925173" watchObservedRunningTime="2026-01-30 06:36:50.882623085 +0000 UTC m=+5346.252533352" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.883525 4931 scope.go:117] "RemoveContainer" containerID="cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.901801 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.912131 4931 scope.go:117] "RemoveContainer" containerID="467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a" Jan 30 06:36:50 crc kubenswrapper[4931]: E0130 06:36:50.912863 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a\": container with ID starting with 467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a not found: ID does not exist" containerID="467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.912902 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a"} err="failed to get container status \"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a\": rpc error: code = NotFound desc = could not find container \"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a\": container with ID starting with 467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a not found: ID does not exist" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.912928 4931 scope.go:117] "RemoveContainer" containerID="cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.913311 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:36:50 crc kubenswrapper[4931]: E0130 06:36:50.913386 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac\": container with ID starting with cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac not found: ID does not exist" containerID="cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.913501 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac"} err="failed to get container status \"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac\": rpc error: code = NotFound desc = could not find container \"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac\": container with ID starting with 
cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac not found: ID does not exist" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.913535 4931 scope.go:117] "RemoveContainer" containerID="467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.913809 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a"} err="failed to get container status \"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a\": rpc error: code = NotFound desc = could not find container \"467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a\": container with ID starting with 467241cd20503a3411a5230450fcdfda57f7e8c27d8d4b3f5f4ca872d524f71a not found: ID does not exist" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.913842 4931 scope.go:117] "RemoveContainer" containerID="cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.914099 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac"} err="failed to get container status \"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac\": rpc error: code = NotFound desc = could not find container \"cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac\": container with ID starting with cbb16f4eedd7535a133df59a844dc93e0135e0f924a3118f1d6c3d755c5826ac not found: ID does not exist" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.930856 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:36:50 crc kubenswrapper[4931]: E0130 06:36:50.931273 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-log" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.931285 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-log" Jan 30 06:36:50 crc kubenswrapper[4931]: E0130 06:36:50.931305 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-httpd" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.931311 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-httpd" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.931494 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-httpd" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.931503 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" containerName="glance-log" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.932452 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.936348 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 06:36:50 crc kubenswrapper[4931]: I0130 06:36:50.938449 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.033288 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hqx\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-kube-api-access-x7hqx\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.033379 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-ceph\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.033403 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-logs\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.033459 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.033772 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.033875 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.034062 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.135868 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-ceph\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " 
pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136117 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-logs\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136174 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136213 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136250 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136307 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136348 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7hqx\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-kube-api-access-x7hqx\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.136805 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.137101 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-logs\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.139839 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.139920 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-ceph\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.139991 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.140916 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.158012 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7hqx\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-kube-api-access-x7hqx\") pod \"glance-default-internal-api-0\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.247654 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.442116 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc" path="/var/lib/kubelet/pods/aae4a7a6-c1f4-4fd4-a844-f35eef27ffbc/volumes" Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.625486 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:36:51 crc kubenswrapper[4931]: I0130 06:36:51.879920 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50e397ef-0630-40db-a591-28d7584dee76","Type":"ContainerStarted","Data":"a16e5756f39cf431b53b36d453d1cc052129da02cb0cac04cc9d6bca4777d6a5"} Jan 30 06:36:52 crc kubenswrapper[4931]: I0130 06:36:52.890043 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50e397ef-0630-40db-a591-28d7584dee76","Type":"ContainerStarted","Data":"d2e2adf695545abae4fa1879a24ad260b75515f25d086b16166bdfe80e55cc22"} Jan 30 06:36:52 crc kubenswrapper[4931]: I0130 06:36:52.890575 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50e397ef-0630-40db-a591-28d7584dee76","Type":"ContainerStarted","Data":"32f7ab137db348695ddb60d19f60238d0ad9feb2d7bfb8d7925247bd1ac76d50"} Jan 30 06:36:52 crc kubenswrapper[4931]: I0130 06:36:52.920975 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.92095278 podStartE2EDuration="2.92095278s" podCreationTimestamp="2026-01-30 06:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:36:52.917813032 +0000 UTC m=+5348.287723329" watchObservedRunningTime="2026-01-30 06:36:52.92095278 +0000 UTC m=+5348.290863037" Jan 30 
06:36:54 crc kubenswrapper[4931]: I0130 06:36:54.228121 4931 scope.go:117] "RemoveContainer" containerID="173f20cd392f59bb3d09e8d879e9a2c54ad0461fcc8850325a690b330805f7aa" Jan 30 06:36:54 crc kubenswrapper[4931]: I0130 06:36:54.317290 4931 scope.go:117] "RemoveContainer" containerID="c20d2d48ca6794144eddad5037d464e6a9ffdad2028bd7ef00590c377c6183ff" Jan 30 06:36:55 crc kubenswrapper[4931]: I0130 06:36:55.562812 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" Jan 30 06:36:55 crc kubenswrapper[4931]: I0130 06:36:55.627870 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d84887cc5-thvpx"] Jan 30 06:36:55 crc kubenswrapper[4931]: I0130 06:36:55.628183 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="dnsmasq-dns" containerID="cri-o://9a53ddc494ac40f2cd235561690db17e6fa36bf1d84c504bb1475d76a3081caf" gracePeriod=10 Jan 30 06:36:55 crc kubenswrapper[4931]: I0130 06:36:55.917358 4931 generic.go:334] "Generic (PLEG): container finished" podID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerID="9a53ddc494ac40f2cd235561690db17e6fa36bf1d84c504bb1475d76a3081caf" exitCode=0 Jan 30 06:36:55 crc kubenswrapper[4931]: I0130 06:36:55.917397 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" event={"ID":"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5","Type":"ContainerDied","Data":"9a53ddc494ac40f2cd235561690db17e6fa36bf1d84c504bb1475d76a3081caf"} Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.098657 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.228624 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-nb\") pod \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.229305 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-sb\") pod \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.229370 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv7bs\" (UniqueName: \"kubernetes.io/projected/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-kube-api-access-qv7bs\") pod \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.229443 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-dns-svc\") pod \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\" (UID: \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.229506 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-config\") pod \"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\" (UID: 
\"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5\") " Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.233850 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-kube-api-access-qv7bs" (OuterVolumeSpecName: "kube-api-access-qv7bs") pod "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" (UID: "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5"). InnerVolumeSpecName "kube-api-access-qv7bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.268065 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" (UID: "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.275100 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" (UID: "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.292665 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" (UID: "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.301213 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-config" (OuterVolumeSpecName: "config") pod "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" (UID: "11ff05b0-f35c-4f00-b0a7-a59f0368d4f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.331205 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.331251 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv7bs\" (UniqueName: \"kubernetes.io/projected/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-kube-api-access-qv7bs\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.331266 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.331281 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.331294 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.941348 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" event={"ID":"11ff05b0-f35c-4f00-b0a7-a59f0368d4f5","Type":"ContainerDied","Data":"48bcc562af333fbf9303da09ed90c070b458fec2e34b83616d91ef56cccabb57"} Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.941413 4931 scope.go:117] "RemoveContainer" containerID="9a53ddc494ac40f2cd235561690db17e6fa36bf1d84c504bb1475d76a3081caf" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.941525 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.973178 4931 scope.go:117] "RemoveContainer" containerID="753bd88c07189bffc8d69c9bf6a34e6d86af3f56c735e6c62d25cd5e5e4da562" Jan 30 06:36:56 crc kubenswrapper[4931]: I0130 06:36:56.991530 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d84887cc5-thvpx"] Jan 30 06:36:57 crc kubenswrapper[4931]: I0130 06:36:57.001648 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d84887cc5-thvpx"] Jan 30 06:36:57 crc kubenswrapper[4931]: I0130 06:36:57.431199 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" path="/var/lib/kubelet/pods/11ff05b0-f35c-4f00-b0a7-a59f0368d4f5/volumes" Jan 30 06:36:59 crc kubenswrapper[4931]: I0130 06:36:59.214684 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 06:36:59 crc kubenswrapper[4931]: I0130 06:36:59.214766 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 06:36:59 crc kubenswrapper[4931]: I0130 06:36:59.268243 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 06:36:59 crc kubenswrapper[4931]: I0130 06:36:59.307790 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 06:36:59 crc kubenswrapper[4931]: I0130 06:36:59.972554 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 06:36:59 crc kubenswrapper[4931]: I0130 06:36:59.972855 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 06:37:00 crc kubenswrapper[4931]: I0130 06:37:00.825649 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-d84887cc5-thvpx" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.39:5353: i/o timeout" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.248133 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.248251 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.285155 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.302202 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.842625 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.920013 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 06:37:01.993625 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:01 crc kubenswrapper[4931]: I0130 
06:37:01.993682 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:03 crc kubenswrapper[4931]: I0130 06:37:03.881200 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:03 crc kubenswrapper[4931]: I0130 06:37:03.972537 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.275677 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zkc49"] Jan 30 06:37:10 crc kubenswrapper[4931]: E0130 06:37:10.276455 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="dnsmasq-dns" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.276467 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="dnsmasq-dns" Jan 30 06:37:10 crc kubenswrapper[4931]: E0130 06:37:10.276488 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="init" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.276495 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="init" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.276651 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ff05b0-f35c-4f00-b0a7-a59f0368d4f5" containerName="dnsmasq-dns" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.277180 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.286981 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zkc49"] Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.324752 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mgs\" (UniqueName: \"kubernetes.io/projected/80acfb99-2d96-453a-b29a-62f23608dd5f-kube-api-access-l8mgs\") pod \"placement-db-create-zkc49\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.324977 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80acfb99-2d96-453a-b29a-62f23608dd5f-operator-scripts\") pod \"placement-db-create-zkc49\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.393344 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9f04-account-create-update-wgg9g"] Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.395375 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.402070 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.405708 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9f04-account-create-update-wgg9g"] Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.426763 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mgs\" (UniqueName: \"kubernetes.io/projected/80acfb99-2d96-453a-b29a-62f23608dd5f-kube-api-access-l8mgs\") pod \"placement-db-create-zkc49\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.427674 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80acfb99-2d96-453a-b29a-62f23608dd5f-operator-scripts\") pod \"placement-db-create-zkc49\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.430959 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80acfb99-2d96-453a-b29a-62f23608dd5f-operator-scripts\") pod \"placement-db-create-zkc49\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.445826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mgs\" (UniqueName: \"kubernetes.io/projected/80acfb99-2d96-453a-b29a-62f23608dd5f-kube-api-access-l8mgs\") pod \"placement-db-create-zkc49\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.529731 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d676c50-5909-4eeb-a22b-63823761ab17-operator-scripts\") pod \"placement-9f04-account-create-update-wgg9g\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.529780 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscjc\" (UniqueName: \"kubernetes.io/projected/7d676c50-5909-4eeb-a22b-63823761ab17-kube-api-access-gscjc\") pod \"placement-9f04-account-create-update-wgg9g\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.611507 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zkc49" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.631734 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d676c50-5909-4eeb-a22b-63823761ab17-operator-scripts\") pod \"placement-9f04-account-create-update-wgg9g\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.631787 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gscjc\" (UniqueName: \"kubernetes.io/projected/7d676c50-5909-4eeb-a22b-63823761ab17-kube-api-access-gscjc\") pod \"placement-9f04-account-create-update-wgg9g\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.632528 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d676c50-5909-4eeb-a22b-63823761ab17-operator-scripts\") pod \"placement-9f04-account-create-update-wgg9g\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.654827 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gscjc\" (UniqueName: \"kubernetes.io/projected/7d676c50-5909-4eeb-a22b-63823761ab17-kube-api-access-gscjc\") pod \"placement-9f04-account-create-update-wgg9g\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:10 crc kubenswrapper[4931]: I0130 06:37:10.719946 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:11 crc kubenswrapper[4931]: I0130 06:37:11.063139 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zkc49"] Jan 30 06:37:11 crc kubenswrapper[4931]: W0130 06:37:11.066069 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80acfb99_2d96_453a_b29a_62f23608dd5f.slice/crio-4e88e0fb7db435e0262dd9c96e86538cd63b93d89b1584f0a7ac9701af498ac7 WatchSource:0}: Error finding container 4e88e0fb7db435e0262dd9c96e86538cd63b93d89b1584f0a7ac9701af498ac7: Status 404 returned error can't find the container with id 4e88e0fb7db435e0262dd9c96e86538cd63b93d89b1584f0a7ac9701af498ac7 Jan 30 06:37:11 crc kubenswrapper[4931]: I0130 06:37:11.091495 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zkc49" event={"ID":"80acfb99-2d96-453a-b29a-62f23608dd5f","Type":"ContainerStarted","Data":"4e88e0fb7db435e0262dd9c96e86538cd63b93d89b1584f0a7ac9701af498ac7"} Jan 30 06:37:11 crc kubenswrapper[4931]: I0130 06:37:11.186635 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9f04-account-create-update-wgg9g"] Jan 30 06:37:11 crc kubenswrapper[4931]: W0130 06:37:11.194312 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d676c50_5909_4eeb_a22b_63823761ab17.slice/crio-dfb836c72c923aa77e375aaf41768e76c04d5c434f7b1d66cdb2624a0eb722f9 WatchSource:0}: Error finding container dfb836c72c923aa77e375aaf41768e76c04d5c434f7b1d66cdb2624a0eb722f9: Status 404 returned error can't find the container with id dfb836c72c923aa77e375aaf41768e76c04d5c434f7b1d66cdb2624a0eb722f9 Jan 30 06:37:12 crc kubenswrapper[4931]: I0130 06:37:12.101832 4931 generic.go:334] "Generic (PLEG): container finished" podID="80acfb99-2d96-453a-b29a-62f23608dd5f" containerID="d17e4d5da3cbd98de6ce9452e4b21d30ccf3ad5f026d8b30b101024dc6fd4576" exitCode=0 Jan 30 06:37:12 crc kubenswrapper[4931]: I0130 06:37:12.101908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zkc49" event={"ID":"80acfb99-2d96-453a-b29a-62f23608dd5f","Type":"ContainerDied","Data":"d17e4d5da3cbd98de6ce9452e4b21d30ccf3ad5f026d8b30b101024dc6fd4576"} Jan 30 06:37:12 crc kubenswrapper[4931]: I0130 06:37:12.104362 4931 generic.go:334] "Generic (PLEG): container finished" podID="7d676c50-5909-4eeb-a22b-63823761ab17" containerID="312f2f7be76e7df2d1974a8fc3d9bcd846d76d9bf2e6bb52018a7e69743078de" exitCode=0 Jan 30 06:37:12 crc kubenswrapper[4931]: I0130 06:37:12.104446 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9f04-account-create-update-wgg9g" event={"ID":"7d676c50-5909-4eeb-a22b-63823761ab17","Type":"ContainerDied","Data":"312f2f7be76e7df2d1974a8fc3d9bcd846d76d9bf2e6bb52018a7e69743078de"} Jan 30 06:37:12 crc kubenswrapper[4931]: I0130 06:37:12.104477 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9f04-account-create-update-wgg9g" event={"ID":"7d676c50-5909-4eeb-a22b-63823761ab17","Type":"ContainerStarted","Data":"dfb836c72c923aa77e375aaf41768e76c04d5c434f7b1d66cdb2624a0eb722f9"} Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.596992 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.608088 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zkc49" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.686079 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d676c50-5909-4eeb-a22b-63823761ab17-operator-scripts\") pod \"7d676c50-5909-4eeb-a22b-63823761ab17\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.686204 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gscjc\" (UniqueName: \"kubernetes.io/projected/7d676c50-5909-4eeb-a22b-63823761ab17-kube-api-access-gscjc\") pod \"7d676c50-5909-4eeb-a22b-63823761ab17\" (UID: \"7d676c50-5909-4eeb-a22b-63823761ab17\") " Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.686261 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80acfb99-2d96-453a-b29a-62f23608dd5f-operator-scripts\") pod \"80acfb99-2d96-453a-b29a-62f23608dd5f\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.686414 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8mgs\" (UniqueName: \"kubernetes.io/projected/80acfb99-2d96-453a-b29a-62f23608dd5f-kube-api-access-l8mgs\") pod \"80acfb99-2d96-453a-b29a-62f23608dd5f\" (UID: \"80acfb99-2d96-453a-b29a-62f23608dd5f\") " Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.686756 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d676c50-5909-4eeb-a22b-63823761ab17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d676c50-5909-4eeb-a22b-63823761ab17" (UID: "7d676c50-5909-4eeb-a22b-63823761ab17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.687193 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d676c50-5909-4eeb-a22b-63823761ab17-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.688073 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80acfb99-2d96-453a-b29a-62f23608dd5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80acfb99-2d96-453a-b29a-62f23608dd5f" (UID: "80acfb99-2d96-453a-b29a-62f23608dd5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.698853 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d676c50-5909-4eeb-a22b-63823761ab17-kube-api-access-gscjc" (OuterVolumeSpecName: "kube-api-access-gscjc") pod "7d676c50-5909-4eeb-a22b-63823761ab17" (UID: "7d676c50-5909-4eeb-a22b-63823761ab17"). InnerVolumeSpecName "kube-api-access-gscjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.699188 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80acfb99-2d96-453a-b29a-62f23608dd5f-kube-api-access-l8mgs" (OuterVolumeSpecName: "kube-api-access-l8mgs") pod "80acfb99-2d96-453a-b29a-62f23608dd5f" (UID: "80acfb99-2d96-453a-b29a-62f23608dd5f"). InnerVolumeSpecName "kube-api-access-l8mgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.789274 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8mgs\" (UniqueName: \"kubernetes.io/projected/80acfb99-2d96-453a-b29a-62f23608dd5f-kube-api-access-l8mgs\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.789317 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gscjc\" (UniqueName: \"kubernetes.io/projected/7d676c50-5909-4eeb-a22b-63823761ab17-kube-api-access-gscjc\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:13 crc kubenswrapper[4931]: I0130 06:37:13.789331 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80acfb99-2d96-453a-b29a-62f23608dd5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:14 crc kubenswrapper[4931]: I0130 06:37:14.127869 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zkc49" Jan 30 06:37:14 crc kubenswrapper[4931]: I0130 06:37:14.127857 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zkc49" event={"ID":"80acfb99-2d96-453a-b29a-62f23608dd5f","Type":"ContainerDied","Data":"4e88e0fb7db435e0262dd9c96e86538cd63b93d89b1584f0a7ac9701af498ac7"} Jan 30 06:37:14 crc kubenswrapper[4931]: I0130 06:37:14.128058 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e88e0fb7db435e0262dd9c96e86538cd63b93d89b1584f0a7ac9701af498ac7" Jan 30 06:37:14 crc kubenswrapper[4931]: I0130 06:37:14.130826 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9f04-account-create-update-wgg9g" event={"ID":"7d676c50-5909-4eeb-a22b-63823761ab17","Type":"ContainerDied","Data":"dfb836c72c923aa77e375aaf41768e76c04d5c434f7b1d66cdb2624a0eb722f9"} Jan 30 06:37:14 crc kubenswrapper[4931]: I0130 06:37:14.130871 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9f04-account-create-update-wgg9g" Jan 30 06:37:14 crc kubenswrapper[4931]: I0130 06:37:14.130885 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb836c72c923aa77e375aaf41768e76c04d5c434f7b1d66cdb2624a0eb722f9" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.724079 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-x9ngk"] Jan 30 06:37:15 crc kubenswrapper[4931]: E0130 06:37:15.724632 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80acfb99-2d96-453a-b29a-62f23608dd5f" containerName="mariadb-database-create" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.724645 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="80acfb99-2d96-453a-b29a-62f23608dd5f" containerName="mariadb-database-create" Jan 30 06:37:15 crc kubenswrapper[4931]: E0130 06:37:15.724657 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d676c50-5909-4eeb-a22b-63823761ab17" containerName="mariadb-account-create-update" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.724662 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d676c50-5909-4eeb-a22b-63823761ab17" containerName="mariadb-account-create-update" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.724824 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d676c50-5909-4eeb-a22b-63823761ab17" containerName="mariadb-account-create-update" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.724840 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="80acfb99-2d96-453a-b29a-62f23608dd5f" containerName="mariadb-database-create" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.725354 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.728361 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.728543 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.728856 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rbl8r" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.739133 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-x9ngk"] Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.746129 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bf9d65499-j99dc"] Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.748691 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.774064 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf9d65499-j99dc"] Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824590 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-scripts\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824642 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9s8d\" (UniqueName: \"kubernetes.io/projected/08e7d2a9-093c-4495-81ab-99972c72b179-kube-api-access-l9s8d\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824677 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-sb\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824698 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-config-data\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824717 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e7d2a9-093c-4495-81ab-99972c72b179-logs\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824757 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwmf\" (UniqueName: \"kubernetes.io/projected/94c3e877-4729-4777-9460-2fdce31b2bc3-kube-api-access-qrwmf\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824853 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-dns-svc\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824951 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-combined-ca-bundle\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.824986 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-nb\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.825069 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-config\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.926974 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrwmf\" (UniqueName: \"kubernetes.io/projected/94c3e877-4729-4777-9460-2fdce31b2bc3-kube-api-access-qrwmf\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.927043 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-dns-svc\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.927096 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-combined-ca-bundle\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.927121 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-nb\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.927163 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-config\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.927279 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-scripts\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928149 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-dns-svc\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928177 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-config\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928306 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-nb\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928451 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9s8d\" (UniqueName: \"kubernetes.io/projected/08e7d2a9-093c-4495-81ab-99972c72b179-kube-api-access-l9s8d\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928507 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-sb\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928538 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-config-data\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928559 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e7d2a9-093c-4495-81ab-99972c72b179-logs\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.928964 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e7d2a9-093c-4495-81ab-99972c72b179-logs\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.929261 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-sb\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.932764 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-combined-ca-bundle\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.938222 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-config-data\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " 
pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.941202 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-scripts\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.947897 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9s8d\" (UniqueName: \"kubernetes.io/projected/08e7d2a9-093c-4495-81ab-99972c72b179-kube-api-access-l9s8d\") pod \"placement-db-sync-x9ngk\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:15 crc kubenswrapper[4931]: I0130 06:37:15.951758 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrwmf\" (UniqueName: \"kubernetes.io/projected/94c3e877-4729-4777-9460-2fdce31b2bc3-kube-api-access-qrwmf\") pod \"dnsmasq-dns-bf9d65499-j99dc\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:16 crc kubenswrapper[4931]: I0130 06:37:16.047787 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:16 crc kubenswrapper[4931]: I0130 06:37:16.071845 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:16 crc kubenswrapper[4931]: I0130 06:37:16.550855 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-x9ngk"] Jan 30 06:37:16 crc kubenswrapper[4931]: I0130 06:37:16.668100 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf9d65499-j99dc"] Jan 30 06:37:17 crc kubenswrapper[4931]: I0130 06:37:17.188245 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x9ngk" event={"ID":"08e7d2a9-093c-4495-81ab-99972c72b179","Type":"ContainerStarted","Data":"a9a1121e8223cb02db35e17fa42991de13dcf37e427dbdd0bf16fa8038651093"} Jan 30 06:37:17 crc kubenswrapper[4931]: I0130 06:37:17.188289 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x9ngk" event={"ID":"08e7d2a9-093c-4495-81ab-99972c72b179","Type":"ContainerStarted","Data":"57de75e2f05fade5baa1cc643f4af3f769856d06f6c34e1b83f87baa4389aede"} Jan 30 06:37:17 crc kubenswrapper[4931]: I0130 06:37:17.193950 4931 generic.go:334] "Generic (PLEG): container finished" podID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerID="d3534b4266cad4f8c042a5d1a723852bc5684f3d2cae49aae0ea01f2e1276ee4" exitCode=0 Jan 30 06:37:17 crc kubenswrapper[4931]: I0130 06:37:17.194227 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" event={"ID":"94c3e877-4729-4777-9460-2fdce31b2bc3","Type":"ContainerDied","Data":"d3534b4266cad4f8c042a5d1a723852bc5684f3d2cae49aae0ea01f2e1276ee4"} Jan 30 06:37:17 crc kubenswrapper[4931]: I0130 06:37:17.194250 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" event={"ID":"94c3e877-4729-4777-9460-2fdce31b2bc3","Type":"ContainerStarted","Data":"cbfdef9dfaed32413bc6b3a12689ef221574794a4832dba0bff0f3e04d140623"} Jan 30 06:37:17 crc kubenswrapper[4931]: I0130 06:37:17.215254 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-x9ngk" 
podStartSLOduration=2.215237447 podStartE2EDuration="2.215237447s" podCreationTimestamp="2026-01-30 06:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:37:17.208176199 +0000 UTC m=+5372.578086466" watchObservedRunningTime="2026-01-30 06:37:17.215237447 +0000 UTC m=+5372.585147704" Jan 30 06:37:18 crc kubenswrapper[4931]: I0130 06:37:18.206180 4931 generic.go:334] "Generic (PLEG): container finished" podID="08e7d2a9-093c-4495-81ab-99972c72b179" containerID="a9a1121e8223cb02db35e17fa42991de13dcf37e427dbdd0bf16fa8038651093" exitCode=0 Jan 30 06:37:18 crc kubenswrapper[4931]: I0130 06:37:18.206280 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x9ngk" event={"ID":"08e7d2a9-093c-4495-81ab-99972c72b179","Type":"ContainerDied","Data":"a9a1121e8223cb02db35e17fa42991de13dcf37e427dbdd0bf16fa8038651093"} Jan 30 06:37:18 crc kubenswrapper[4931]: I0130 06:37:18.212088 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" event={"ID":"94c3e877-4729-4777-9460-2fdce31b2bc3","Type":"ContainerStarted","Data":"2e8dbb3daa04d0b7ca86823625c46d0d78b12efba028999fcdcceb4252172d0f"} Jan 30 06:37:18 crc kubenswrapper[4931]: I0130 06:37:18.212235 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:18 crc kubenswrapper[4931]: I0130 06:37:18.251680 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" podStartSLOduration=3.251663842 podStartE2EDuration="3.251663842s" podCreationTimestamp="2026-01-30 06:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:37:18.245827999 +0000 UTC m=+5373.615738266" watchObservedRunningTime="2026-01-30 06:37:18.251663842 +0000 UTC m=+5373.621574099" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.616937 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.710167 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-scripts\") pod \"08e7d2a9-093c-4495-81ab-99972c72b179\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.710238 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9s8d\" (UniqueName: \"kubernetes.io/projected/08e7d2a9-093c-4495-81ab-99972c72b179-kube-api-access-l9s8d\") pod \"08e7d2a9-093c-4495-81ab-99972c72b179\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.710261 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e7d2a9-093c-4495-81ab-99972c72b179-logs\") pod \"08e7d2a9-093c-4495-81ab-99972c72b179\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.710371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-combined-ca-bundle\") pod \"08e7d2a9-093c-4495-81ab-99972c72b179\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.710389 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-config-data\") pod \"08e7d2a9-093c-4495-81ab-99972c72b179\" (UID: \"08e7d2a9-093c-4495-81ab-99972c72b179\") " Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.711121 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e7d2a9-093c-4495-81ab-99972c72b179-logs" (OuterVolumeSpecName: "logs") pod "08e7d2a9-093c-4495-81ab-99972c72b179" (UID: "08e7d2a9-093c-4495-81ab-99972c72b179"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.717666 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e7d2a9-093c-4495-81ab-99972c72b179-kube-api-access-l9s8d" (OuterVolumeSpecName: "kube-api-access-l9s8d") pod "08e7d2a9-093c-4495-81ab-99972c72b179" (UID: "08e7d2a9-093c-4495-81ab-99972c72b179"). InnerVolumeSpecName "kube-api-access-l9s8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.728238 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-scripts" (OuterVolumeSpecName: "scripts") pod "08e7d2a9-093c-4495-81ab-99972c72b179" (UID: "08e7d2a9-093c-4495-81ab-99972c72b179"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.734293 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08e7d2a9-093c-4495-81ab-99972c72b179" (UID: "08e7d2a9-093c-4495-81ab-99972c72b179"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.736635 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-config-data" (OuterVolumeSpecName: "config-data") pod "08e7d2a9-093c-4495-81ab-99972c72b179" (UID: "08e7d2a9-093c-4495-81ab-99972c72b179"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.812201 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.812237 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9s8d\" (UniqueName: \"kubernetes.io/projected/08e7d2a9-093c-4495-81ab-99972c72b179-kube-api-access-l9s8d\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.812268 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08e7d2a9-093c-4495-81ab-99972c72b179-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.812282 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:19 crc kubenswrapper[4931]: I0130 06:37:19.812295 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e7d2a9-093c-4495-81ab-99972c72b179-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.237234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-x9ngk" event={"ID":"08e7d2a9-093c-4495-81ab-99972c72b179","Type":"ContainerDied","Data":"57de75e2f05fade5baa1cc643f4af3f769856d06f6c34e1b83f87baa4389aede"} Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.237289 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57de75e2f05fade5baa1cc643f4af3f769856d06f6c34e1b83f87baa4389aede" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.237370 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-x9ngk" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.324003 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b467bdbbb-ds8j4"] Jan 30 06:37:20 crc kubenswrapper[4931]: E0130 06:37:20.324887 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e7d2a9-093c-4495-81ab-99972c72b179" containerName="placement-db-sync" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.325162 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e7d2a9-093c-4495-81ab-99972c72b179" containerName="placement-db-sync" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.325847 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e7d2a9-093c-4495-81ab-99972c72b179" containerName="placement-db-sync" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.327551 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.330161 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.330474 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rbl8r" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.330823 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.332632 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b467bdbbb-ds8j4"] Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.420821 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-combined-ca-bundle\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.421238 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-logs\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.421262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dstw\" (UniqueName: \"kubernetes.io/projected/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-kube-api-access-5dstw\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.421286 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-config-data\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.421312 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-scripts\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.527484 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-logs\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.527560 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dstw\" (UniqueName: \"kubernetes.io/projected/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-kube-api-access-5dstw\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 
06:37:20.527592 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-config-data\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.527636 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-scripts\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.527709 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-combined-ca-bundle\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.529005 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-logs\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.535722 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-config-data\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.542588 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-combined-ca-bundle\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.547053 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-scripts\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.557456 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dstw\" (UniqueName: \"kubernetes.io/projected/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-kube-api-access-5dstw\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:20 crc kubenswrapper[4931]: I0130 06:37:20.652072 4931 util.go:30] "No sandbox for pod can be found. 
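
These reconciler_common/operation_generator records are uniform enough to parse mechanically, which is handy when tracing one pod's volumes through thousands of such lines. A small Go sketch that pulls the volume name, UniqueName, and pod out of one of the lines above (the regexp is an assumption inferred from the entry format, not an official schema):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        line := `I0130 06:37:20.529005 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6359f2c1-ac0c-4084-969e-7cff11e8b4d8-logs\") pod \"placement-6b467bdbbb-ds8j4\" (UID: \"6359f2c1-ac0c-4084-969e-7cff11e8b4d8\") " pod="openstack/placement-6b467bdbbb-ds8j4"`
        // Volume name, UniqueName, and pod exactly as they appear in the quoted message.
        re := regexp.MustCompile(`volume \\"([^"]+)\\" \(UniqueName: \\"([^"]+)\\"\) pod \\"([^"]+)\\"`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Println("volume:", m[1]) // logs
            fmt.Println("unique:", m[2]) // kubernetes.io/empty-dir/6359f2c1-...-logs
            fmt.Println("pod:   ", m[3]) // placement-6b467bdbbb-ds8j4
        }
    }
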
Need to start a new one" pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:21 crc kubenswrapper[4931]: I0130 06:37:21.087001 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b467bdbbb-ds8j4"] Jan 30 06:37:21 crc kubenswrapper[4931]: I0130 06:37:21.249907 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b467bdbbb-ds8j4" event={"ID":"6359f2c1-ac0c-4084-969e-7cff11e8b4d8","Type":"ContainerStarted","Data":"74e2dbece03674369045d767d71b925745e45fd3043851d166f34f7f4e62abbe"} Jan 30 06:37:22 crc kubenswrapper[4931]: I0130 06:37:22.263917 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b467bdbbb-ds8j4" event={"ID":"6359f2c1-ac0c-4084-969e-7cff11e8b4d8","Type":"ContainerStarted","Data":"46dc669fdce1036d06ad44630a282a5294d2d76e0a512ad39f056a4c1e78453f"} Jan 30 06:37:22 crc kubenswrapper[4931]: I0130 06:37:22.264598 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b467bdbbb-ds8j4" event={"ID":"6359f2c1-ac0c-4084-969e-7cff11e8b4d8","Type":"ContainerStarted","Data":"1d8c77779be70b77d3eeed3d42eed85f62d4217b5baa219074ac9acb30a622a3"} Jan 30 06:37:22 crc kubenswrapper[4931]: I0130 06:37:22.264643 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:22 crc kubenswrapper[4931]: I0130 06:37:22.264667 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:22 crc kubenswrapper[4931]: I0130 06:37:22.293249 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b467bdbbb-ds8j4" podStartSLOduration=2.293213719 podStartE2EDuration="2.293213719s" podCreationTimestamp="2026-01-30 06:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:37:22.289087823 +0000 UTC m=+5377.658998090" watchObservedRunningTime="2026-01-30 06:37:22.293213719 +0000 UTC m=+5377.663124016" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.073747 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.163630 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf7fddbc7-982b5"] Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.163920 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerName="dnsmasq-dns" containerID="cri-o://64b8f4307220d8bf205a24561416d6dc5c9e47a88b8a78a79bc73aae4b6035c4" gracePeriod=10 Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.297663 4931 generic.go:334] "Generic (PLEG): container finished" podID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerID="64b8f4307220d8bf205a24561416d6dc5c9e47a88b8a78a79bc73aae4b6035c4" exitCode=0 Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.297710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" event={"ID":"e7f7543a-72f1-4937-95d9-8869b77ab81d","Type":"ContainerDied","Data":"64b8f4307220d8bf205a24561416d6dc5c9e47a88b8a78a79bc73aae4b6035c4"} Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.664312 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.759143 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-nb\") pod \"e7f7543a-72f1-4937-95d9-8869b77ab81d\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.759229 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-config\") pod \"e7f7543a-72f1-4937-95d9-8869b77ab81d\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.759252 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-dns-svc\") pod \"e7f7543a-72f1-4937-95d9-8869b77ab81d\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.759325 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-sb\") pod \"e7f7543a-72f1-4937-95d9-8869b77ab81d\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.759502 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6k6k\" (UniqueName: \"kubernetes.io/projected/e7f7543a-72f1-4937-95d9-8869b77ab81d-kube-api-access-s6k6k\") pod \"e7f7543a-72f1-4937-95d9-8869b77ab81d\" (UID: \"e7f7543a-72f1-4937-95d9-8869b77ab81d\") " Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.771122 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f7543a-72f1-4937-95d9-8869b77ab81d-kube-api-access-s6k6k" (OuterVolumeSpecName: "kube-api-access-s6k6k") pod "e7f7543a-72f1-4937-95d9-8869b77ab81d" (UID: "e7f7543a-72f1-4937-95d9-8869b77ab81d"). InnerVolumeSpecName "kube-api-access-s6k6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.811561 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-config" (OuterVolumeSpecName: "config") pod "e7f7543a-72f1-4937-95d9-8869b77ab81d" (UID: "e7f7543a-72f1-4937-95d9-8869b77ab81d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.811745 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7f7543a-72f1-4937-95d9-8869b77ab81d" (UID: "e7f7543a-72f1-4937-95d9-8869b77ab81d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.812929 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7f7543a-72f1-4937-95d9-8869b77ab81d" (UID: "e7f7543a-72f1-4937-95d9-8869b77ab81d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.817313 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7f7543a-72f1-4937-95d9-8869b77ab81d" (UID: "e7f7543a-72f1-4937-95d9-8869b77ab81d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.860754 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.860784 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.860794 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.860802 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7f7543a-72f1-4937-95d9-8869b77ab81d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:26 crc kubenswrapper[4931]: I0130 06:37:26.860812 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6k6k\" (UniqueName: \"kubernetes.io/projected/e7f7543a-72f1-4937-95d9-8869b77ab81d-kube-api-access-s6k6k\") on node \"crc\" DevicePath \"\"" Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.311964 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" event={"ID":"e7f7543a-72f1-4937-95d9-8869b77ab81d","Type":"ContainerDied","Data":"424c4112ef530e297d0cff0d4af771a05512506e55897ed954c10a6043fe7171"} Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.312043 4931 scope.go:117] "RemoveContainer" containerID="64b8f4307220d8bf205a24561416d6dc5c9e47a88b8a78a79bc73aae4b6035c4" Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.312078 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cf7fddbc7-982b5" Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.334681 4931 scope.go:117] "RemoveContainer" containerID="7119d5284982674648c0826e4626a711e51ca2133b6b1310bb2a6ca06e64c6b3" Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.376780 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf7fddbc7-982b5"] Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.389278 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cf7fddbc7-982b5"] Jan 30 06:37:27 crc kubenswrapper[4931]: I0130 06:37:27.437953 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" path="/var/lib/kubelet/pods/e7f7543a-72f1-4937-95d9-8869b77ab81d/volumes" Jan 30 06:37:51 crc kubenswrapper[4931]: I0130 06:37:51.622679 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:51 crc kubenswrapper[4931]: I0130 06:37:51.623358 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b467bdbbb-ds8j4" Jan 30 06:37:57 crc kubenswrapper[4931]: I0130 06:37:57.363552 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:37:57 crc kubenswrapper[4931]: I0130 06:37:57.363933 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.533221 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vm2gb"] Jan 30 06:38:13 crc kubenswrapper[4931]: E0130 06:38:13.534454 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerName="dnsmasq-dns" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.534481 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerName="dnsmasq-dns" Jan 30 06:38:13 crc kubenswrapper[4931]: E0130 06:38:13.534508 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerName="init" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.534519 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerName="init" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.534782 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f7543a-72f1-4937-95d9-8869b77ab81d" containerName="dnsmasq-dns" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.535555 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.553547 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vm2gb"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.615219 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5xpsl"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.617908 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.623001 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5xpsl"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.704377 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-operator-scripts\") pod \"nova-api-db-create-vm2gb\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") " pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.704499 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4vh\" (UniqueName: \"kubernetes.io/projected/ede2117e-e3d5-46f6-8a54-1cd987370470-kube-api-access-hk4vh\") pod \"nova-cell0-db-create-5xpsl\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") " pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.704642 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede2117e-e3d5-46f6-8a54-1cd987370470-operator-scripts\") pod \"nova-cell0-db-create-5xpsl\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") " pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.704721 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqz62\" (UniqueName: \"kubernetes.io/projected/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-kube-api-access-rqz62\") pod \"nova-api-db-create-vm2gb\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") " pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.731271 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c3ab-account-create-update-6wqgk"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.732346 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.736400 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.736852 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c3ab-account-create-update-6wqgk"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.746328 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7dkqh"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.747318 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.754279 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7dkqh"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.806569 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede2117e-e3d5-46f6-8a54-1cd987370470-operator-scripts\") pod \"nova-cell0-db-create-5xpsl\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") " pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.806641 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqz62\" (UniqueName: \"kubernetes.io/projected/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-kube-api-access-rqz62\") pod \"nova-api-db-create-vm2gb\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") " pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.806682 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-operator-scripts\") pod \"nova-api-db-create-vm2gb\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") " pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.806713 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk4vh\" (UniqueName: \"kubernetes.io/projected/ede2117e-e3d5-46f6-8a54-1cd987370470-kube-api-access-hk4vh\") pod \"nova-cell0-db-create-5xpsl\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") " pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.807742 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede2117e-e3d5-46f6-8a54-1cd987370470-operator-scripts\") pod \"nova-cell0-db-create-5xpsl\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") " pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.808344 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-operator-scripts\") pod \"nova-api-db-create-vm2gb\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") " pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.822934 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk4vh\" (UniqueName: \"kubernetes.io/projected/ede2117e-e3d5-46f6-8a54-1cd987370470-kube-api-access-hk4vh\") pod \"nova-cell0-db-create-5xpsl\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") " pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.825195 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqz62\" (UniqueName: \"kubernetes.io/projected/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-kube-api-access-rqz62\") pod \"nova-api-db-create-vm2gb\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") " pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.866038 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.915126 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9932-account-create-update-6qlx2"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.917646 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730d8243-e8f1-4b7a-b012-d65ff132d427-operator-scripts\") pod \"nova-api-c3ab-account-create-update-6wqgk\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") " pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.917953 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jrp\" (UniqueName: \"kubernetes.io/projected/0d2ac10a-2179-4d51-b7e8-31ac3621d798-kube-api-access-k9jrp\") pod \"nova-cell1-db-create-7dkqh\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") " pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.918012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d2ac10a-2179-4d51-b7e8-31ac3621d798-operator-scripts\") pod \"nova-cell1-db-create-7dkqh\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") " pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.918058 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zhz\" (UniqueName: \"kubernetes.io/projected/730d8243-e8f1-4b7a-b012-d65ff132d427-kube-api-access-l6zhz\") pod \"nova-api-c3ab-account-create-update-6wqgk\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") " pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.919491 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.928241 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.929938 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9932-account-create-update-6qlx2"] Jan 30 06:38:13 crc kubenswrapper[4931]: I0130 06:38:13.934021 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5xpsl" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.022950 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730d8243-e8f1-4b7a-b012-d65ff132d427-operator-scripts\") pod \"nova-api-c3ab-account-create-update-6wqgk\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") " pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.023258 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9jrp\" (UniqueName: \"kubernetes.io/projected/0d2ac10a-2179-4d51-b7e8-31ac3621d798-kube-api-access-k9jrp\") pod \"nova-cell1-db-create-7dkqh\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") " pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.023286 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d2ac10a-2179-4d51-b7e8-31ac3621d798-operator-scripts\") pod \"nova-cell1-db-create-7dkqh\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") " pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.024133 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730d8243-e8f1-4b7a-b012-d65ff132d427-operator-scripts\") pod \"nova-api-c3ab-account-create-update-6wqgk\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") " pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.024133 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d2ac10a-2179-4d51-b7e8-31ac3621d798-operator-scripts\") pod \"nova-cell1-db-create-7dkqh\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") " pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.024134 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8gzc\" (UniqueName: \"kubernetes.io/projected/0c2a0233-04c5-4382-948d-809c1216b075-kube-api-access-t8gzc\") pod \"nova-cell0-9932-account-create-update-6qlx2\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") " pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.024274 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zhz\" (UniqueName: \"kubernetes.io/projected/730d8243-e8f1-4b7a-b012-d65ff132d427-kube-api-access-l6zhz\") pod \"nova-api-c3ab-account-create-update-6wqgk\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") " pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.025455 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2a0233-04c5-4382-948d-809c1216b075-operator-scripts\") pod \"nova-cell0-9932-account-create-update-6qlx2\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") " pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.044684 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zhz\" 
(UniqueName: \"kubernetes.io/projected/730d8243-e8f1-4b7a-b012-d65ff132d427-kube-api-access-l6zhz\") pod \"nova-api-c3ab-account-create-update-6wqgk\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") " pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.049196 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9jrp\" (UniqueName: \"kubernetes.io/projected/0d2ac10a-2179-4d51-b7e8-31ac3621d798-kube-api-access-k9jrp\") pod \"nova-cell1-db-create-7dkqh\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") " pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.065746 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c3ab-account-create-update-6wqgk" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.078191 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.126716 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8gzc\" (UniqueName: \"kubernetes.io/projected/0c2a0233-04c5-4382-948d-809c1216b075-kube-api-access-t8gzc\") pod \"nova-cell0-9932-account-create-update-6qlx2\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") " pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.126812 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2a0233-04c5-4382-948d-809c1216b075-operator-scripts\") pod \"nova-cell0-9932-account-create-update-6qlx2\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") " pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.127559 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2a0233-04c5-4382-948d-809c1216b075-operator-scripts\") pod \"nova-cell0-9932-account-create-update-6qlx2\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") " pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.134349 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-184a-account-create-update-b6t5s"] Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.135506 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.137285 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.148264 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8gzc\" (UniqueName: \"kubernetes.io/projected/0c2a0233-04c5-4382-948d-809c1216b075-kube-api-access-t8gzc\") pod \"nova-cell0-9932-account-create-update-6qlx2\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") " pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.155538 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-184a-account-create-update-b6t5s"] Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.324661 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9932-account-create-update-6qlx2" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.329158 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a06db3-c381-45ef-883d-ee7393822e5a-operator-scripts\") pod \"nova-cell1-184a-account-create-update-b6t5s\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") " pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.329195 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snn26\" (UniqueName: \"kubernetes.io/projected/c7a06db3-c381-45ef-883d-ee7393822e5a-kube-api-access-snn26\") pod \"nova-cell1-184a-account-create-update-b6t5s\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") " pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.391718 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vm2gb"] Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.431087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a06db3-c381-45ef-883d-ee7393822e5a-operator-scripts\") pod \"nova-cell1-184a-account-create-update-b6t5s\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") " pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.431130 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snn26\" (UniqueName: \"kubernetes.io/projected/c7a06db3-c381-45ef-883d-ee7393822e5a-kube-api-access-snn26\") pod \"nova-cell1-184a-account-create-update-b6t5s\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") " pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.432230 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a06db3-c381-45ef-883d-ee7393822e5a-operator-scripts\") pod \"nova-cell1-184a-account-create-update-b6t5s\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") " pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.459306 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snn26\" (UniqueName: 
\"kubernetes.io/projected/c7a06db3-c381-45ef-883d-ee7393822e5a-kube-api-access-snn26\") pod \"nova-cell1-184a-account-create-update-b6t5s\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") " pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.506693 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5xpsl"] Jan 30 06:38:14 crc kubenswrapper[4931]: W0130 06:38:14.516910 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede2117e_e3d5_46f6_8a54_1cd987370470.slice/crio-5b80f05329cca2897dfa70c9bdc0e4c4f460e556ac74de0d1d1d646a96c57a30 WatchSource:0}: Error finding container 5b80f05329cca2897dfa70c9bdc0e4c4f460e556ac74de0d1d1d646a96c57a30: Status 404 returned error can't find the container with id 5b80f05329cca2897dfa70c9bdc0e4c4f460e556ac74de0d1d1d646a96c57a30 Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.631315 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7dkqh"] Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.639149 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c3ab-account-create-update-6wqgk"] Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.757745 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.793539 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9932-account-create-update-6qlx2"] Jan 30 06:38:14 crc kubenswrapper[4931]: W0130 06:38:14.800238 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c2a0233_04c5_4382_948d_809c1216b075.slice/crio-af5a106da150f0bb17662ab7ab3e10a8f540d31cc16fca47df9962bdb4fded43 WatchSource:0}: Error finding container af5a106da150f0bb17662ab7ab3e10a8f540d31cc16fca47df9962bdb4fded43: Status 404 returned error can't find the container with id af5a106da150f0bb17662ab7ab3e10a8f540d31cc16fca47df9962bdb4fded43 Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.829881 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5xpsl" event={"ID":"ede2117e-e3d5-46f6-8a54-1cd987370470","Type":"ContainerStarted","Data":"3e425706ce28b09c3d63a84e9658921cb423491b485b43961ce5f36589fbc46a"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.829929 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5xpsl" event={"ID":"ede2117e-e3d5-46f6-8a54-1cd987370470","Type":"ContainerStarted","Data":"5b80f05329cca2897dfa70c9bdc0e4c4f460e556ac74de0d1d1d646a96c57a30"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.831768 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ab-account-create-update-6wqgk" event={"ID":"730d8243-e8f1-4b7a-b012-d65ff132d427","Type":"ContainerStarted","Data":"8b48030bead5b8810494ecca940f3ce0fe837fa405d9830629e0021c23f80e05"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.848728 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vm2gb" event={"ID":"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4","Type":"ContainerStarted","Data":"e6ba02a25b1e0ee2f6ab68e84a5695ffab01e5de9788f62975d6f6c203f6437a"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.848811 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vm2gb" event={"ID":"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4","Type":"ContainerStarted","Data":"a98150e0e31af136066d55cd6aac76cfe239ea6044527ea73c361e8d1c5d2a0e"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.851880 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dkqh" event={"ID":"0d2ac10a-2179-4d51-b7e8-31ac3621d798","Type":"ContainerStarted","Data":"27e13b6b085d0725e59ac8bc7078474da43dc2cdd83ba76b6820d4ffc9d8594f"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.851935 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dkqh" event={"ID":"0d2ac10a-2179-4d51-b7e8-31ac3621d798","Type":"ContainerStarted","Data":"e7421e16eac378e7a0dea1f9d24b088f5207c980e51a7f3d6f384ea981d58f88"} Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.853347 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5xpsl" podStartSLOduration=1.853333329 podStartE2EDuration="1.853333329s" podCreationTimestamp="2026-01-30 06:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:14.848244046 +0000 UTC m=+5430.218154313" watchObservedRunningTime="2026-01-30 06:38:14.853333329 +0000 UTC m=+5430.223243586" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.868005 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vm2gb" podStartSLOduration=1.867986749 podStartE2EDuration="1.867986749s" podCreationTimestamp="2026-01-30 06:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:14.865056147 +0000 UTC m=+5430.234966404" watchObservedRunningTime="2026-01-30 06:38:14.867986749 +0000 UTC m=+5430.237897006" Jan 30 06:38:14 crc kubenswrapper[4931]: I0130 06:38:14.897159 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-7dkqh" podStartSLOduration=1.897120284 podStartE2EDuration="1.897120284s" podCreationTimestamp="2026-01-30 06:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:14.882960878 +0000 UTC m=+5430.252871135" watchObservedRunningTime="2026-01-30 06:38:14.897120284 +0000 UTC m=+5430.267030551" Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.289900 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-184a-account-create-update-b6t5s"] Jan 30 06:38:15 crc kubenswrapper[4931]: W0130 06:38:15.299833 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7a06db3_c381_45ef_883d_ee7393822e5a.slice/crio-be0cce4914a349c03bb254b8aec6996628db2bb168ecf5eee1fd286e7266cb60 WatchSource:0}: Error finding container be0cce4914a349c03bb254b8aec6996628db2bb168ecf5eee1fd286e7266cb60: Status 404 returned error can't find the container with id be0cce4914a349c03bb254b8aec6996628db2bb168ecf5eee1fd286e7266cb60 Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.865506 4931 generic.go:334] "Generic (PLEG): container finished" podID="5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" containerID="e6ba02a25b1e0ee2f6ab68e84a5695ffab01e5de9788f62975d6f6c203f6437a" exitCode=0 Jan 30 
06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.865695 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vm2gb" event={"ID":"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4","Type":"ContainerDied","Data":"e6ba02a25b1e0ee2f6ab68e84a5695ffab01e5de9788f62975d6f6c203f6437a"} Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.869535 4931 generic.go:334] "Generic (PLEG): container finished" podID="0d2ac10a-2179-4d51-b7e8-31ac3621d798" containerID="27e13b6b085d0725e59ac8bc7078474da43dc2cdd83ba76b6820d4ffc9d8594f" exitCode=0 Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.869747 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dkqh" event={"ID":"0d2ac10a-2179-4d51-b7e8-31ac3621d798","Type":"ContainerDied","Data":"27e13b6b085d0725e59ac8bc7078474da43dc2cdd83ba76b6820d4ffc9d8594f"} Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.876189 4931 generic.go:334] "Generic (PLEG): container finished" podID="c7a06db3-c381-45ef-883d-ee7393822e5a" containerID="5ecad2bc0e017bf1777e069b0d51cce5fdf83ffb212486a161ec83e5ab28a776" exitCode=0 Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.876268 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-184a-account-create-update-b6t5s" event={"ID":"c7a06db3-c381-45ef-883d-ee7393822e5a","Type":"ContainerDied","Data":"5ecad2bc0e017bf1777e069b0d51cce5fdf83ffb212486a161ec83e5ab28a776"} Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.876298 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-184a-account-create-update-b6t5s" event={"ID":"c7a06db3-c381-45ef-883d-ee7393822e5a","Type":"ContainerStarted","Data":"be0cce4914a349c03bb254b8aec6996628db2bb168ecf5eee1fd286e7266cb60"} Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.885585 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5xpsl" event={"ID":"ede2117e-e3d5-46f6-8a54-1cd987370470","Type":"ContainerDied","Data":"3e425706ce28b09c3d63a84e9658921cb423491b485b43961ce5f36589fbc46a"} Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.887747 4931 generic.go:334] "Generic (PLEG): container finished" podID="ede2117e-e3d5-46f6-8a54-1cd987370470" containerID="3e425706ce28b09c3d63a84e9658921cb423491b485b43961ce5f36589fbc46a" exitCode=0 Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.891475 4931 generic.go:334] "Generic (PLEG): container finished" podID="730d8243-e8f1-4b7a-b012-d65ff132d427" containerID="6d585a871e0bcc02099ca7b0fec64c91cc44765f6bff9850b839ed74e64354fb" exitCode=0 Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.891552 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ab-account-create-update-6wqgk" event={"ID":"730d8243-e8f1-4b7a-b012-d65ff132d427","Type":"ContainerDied","Data":"6d585a871e0bcc02099ca7b0fec64c91cc44765f6bff9850b839ed74e64354fb"} Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.893935 4931 generic.go:334] "Generic (PLEG): container finished" podID="0c2a0233-04c5-4382-948d-809c1216b075" containerID="982580309a618618acf59d7ed62dffc9baa63654e107e79e17f31ae5e09b9d10" exitCode=0 Jan 30 06:38:15 crc kubenswrapper[4931]: I0130 06:38:15.893986 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9932-account-create-update-6qlx2" event={"ID":"0c2a0233-04c5-4382-948d-809c1216b075","Type":"ContainerDied","Data":"982580309a618618acf59d7ed62dffc9baa63654e107e79e17f31ae5e09b9d10"} Jan 30 06:38:15 crc kubenswrapper[4931]: 
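
Each generic.go:334 "container finished" record and its paired ContainerDied event come from PLEG's relist: the previous and current container snapshots are diffed and one event is emitted per transition. A compact Go sketch of that diff (a minimal sketch only; the real PLEG also tracks sandboxes and per-container exit codes, and its types differ):

    package main

    import "fmt"

    type state int

    const (
        running state = iota
        exited
    )

    type event struct{ kind, id string }

    // relist diffs the old and new container snapshots and emits one event per
    // observed transition, mirroring ContainerStarted/ContainerDied above.
    func relist(prev, cur map[string]state) []event {
        var evs []event
        for id, s := range cur {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                evs = append(evs, event{"ContainerStarted", id})
            case seen && old == running && s == exited:
                evs = append(evs, event{"ContainerDied", id})
            }
        }
        return evs
    }

    func main() {
        prev := map[string]state{"e6ba02a25b1e": running}
        cur := map[string]state{"e6ba02a25b1e": exited}
        for _, e := range relist(prev, cur) {
            fmt.Printf("SyncLoop (PLEG): event %s %s\n", e.kind, e.id)
        }
    }
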
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.283398 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5xpsl"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.390750 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk4vh\" (UniqueName: \"kubernetes.io/projected/ede2117e-e3d5-46f6-8a54-1cd987370470-kube-api-access-hk4vh\") pod \"ede2117e-e3d5-46f6-8a54-1cd987370470\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.390814 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede2117e-e3d5-46f6-8a54-1cd987370470-operator-scripts\") pod \"ede2117e-e3d5-46f6-8a54-1cd987370470\" (UID: \"ede2117e-e3d5-46f6-8a54-1cd987370470\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.391776 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede2117e-e3d5-46f6-8a54-1cd987370470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ede2117e-e3d5-46f6-8a54-1cd987370470" (UID: "ede2117e-e3d5-46f6-8a54-1cd987370470"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.412290 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede2117e-e3d5-46f6-8a54-1cd987370470-kube-api-access-hk4vh" (OuterVolumeSpecName: "kube-api-access-hk4vh") pod "ede2117e-e3d5-46f6-8a54-1cd987370470" (UID: "ede2117e-e3d5-46f6-8a54-1cd987370470"). InnerVolumeSpecName "kube-api-access-hk4vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.492515 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk4vh\" (UniqueName: \"kubernetes.io/projected/ede2117e-e3d5-46f6-8a54-1cd987370470-kube-api-access-hk4vh\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.492558 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ede2117e-e3d5-46f6-8a54-1cd987370470-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.575338 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c3ab-account-create-update-6wqgk"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.581538 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vm2gb"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.597444 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9932-account-create-update-6qlx2"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.602175 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7dkqh"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.622948 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-184a-account-create-update-b6t5s"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695014 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a06db3-c381-45ef-883d-ee7393822e5a-operator-scripts\") pod \"c7a06db3-c381-45ef-883d-ee7393822e5a\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695066 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2a0233-04c5-4382-948d-809c1216b075-operator-scripts\") pod \"0c2a0233-04c5-4382-948d-809c1216b075\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695122 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zhz\" (UniqueName: \"kubernetes.io/projected/730d8243-e8f1-4b7a-b012-d65ff132d427-kube-api-access-l6zhz\") pod \"730d8243-e8f1-4b7a-b012-d65ff132d427\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695158 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730d8243-e8f1-4b7a-b012-d65ff132d427-operator-scripts\") pod \"730d8243-e8f1-4b7a-b012-d65ff132d427\" (UID: \"730d8243-e8f1-4b7a-b012-d65ff132d427\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695184 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8gzc\" (UniqueName: \"kubernetes.io/projected/0c2a0233-04c5-4382-948d-809c1216b075-kube-api-access-t8gzc\") pod \"0c2a0233-04c5-4382-948d-809c1216b075\" (UID: \"0c2a0233-04c5-4382-948d-809c1216b075\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695244 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqz62\" (UniqueName: \"kubernetes.io/projected/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-kube-api-access-rqz62\") pod \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695260 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-operator-scripts\") pod \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\" (UID: \"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695293 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d2ac10a-2179-4d51-b7e8-31ac3621d798-operator-scripts\") pod \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695328 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snn26\" (UniqueName: \"kubernetes.io/projected/c7a06db3-c381-45ef-883d-ee7393822e5a-kube-api-access-snn26\") pod \"c7a06db3-c381-45ef-883d-ee7393822e5a\" (UID: \"c7a06db3-c381-45ef-883d-ee7393822e5a\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.695353 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9jrp\" (UniqueName: \"kubernetes.io/projected/0d2ac10a-2179-4d51-b7e8-31ac3621d798-kube-api-access-k9jrp\") pod \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\" (UID: \"0d2ac10a-2179-4d51-b7e8-31ac3621d798\") "
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.696480 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a06db3-c381-45ef-883d-ee7393822e5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7a06db3-c381-45ef-883d-ee7393822e5a" (UID: "c7a06db3-c381-45ef-883d-ee7393822e5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.696886 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" (UID: "5d595bdd-ffa6-4292-b4c2-1eba0736a6a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.696964 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2ac10a-2179-4d51-b7e8-31ac3621d798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d2ac10a-2179-4d51-b7e8-31ac3621d798" (UID: "0d2ac10a-2179-4d51-b7e8-31ac3621d798"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.697729 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730d8243-e8f1-4b7a-b012-d65ff132d427-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "730d8243-e8f1-4b7a-b012-d65ff132d427" (UID: "730d8243-e8f1-4b7a-b012-d65ff132d427"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.697850 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c2a0233-04c5-4382-948d-809c1216b075-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c2a0233-04c5-4382-948d-809c1216b075" (UID: "0c2a0233-04c5-4382-948d-809c1216b075"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.699298 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2a0233-04c5-4382-948d-809c1216b075-kube-api-access-t8gzc" (OuterVolumeSpecName: "kube-api-access-t8gzc") pod "0c2a0233-04c5-4382-948d-809c1216b075" (UID: "0c2a0233-04c5-4382-948d-809c1216b075"). InnerVolumeSpecName "kube-api-access-t8gzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.699494 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-kube-api-access-rqz62" (OuterVolumeSpecName: "kube-api-access-rqz62") pod "5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" (UID: "5d595bdd-ffa6-4292-b4c2-1eba0736a6a4"). InnerVolumeSpecName "kube-api-access-rqz62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.699637 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730d8243-e8f1-4b7a-b012-d65ff132d427-kube-api-access-l6zhz" (OuterVolumeSpecName: "kube-api-access-l6zhz") pod "730d8243-e8f1-4b7a-b012-d65ff132d427" (UID: "730d8243-e8f1-4b7a-b012-d65ff132d427"). InnerVolumeSpecName "kube-api-access-l6zhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.699727 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a06db3-c381-45ef-883d-ee7393822e5a-kube-api-access-snn26" (OuterVolumeSpecName: "kube-api-access-snn26") pod "c7a06db3-c381-45ef-883d-ee7393822e5a" (UID: "c7a06db3-c381-45ef-883d-ee7393822e5a"). InnerVolumeSpecName "kube-api-access-snn26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.702139 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2ac10a-2179-4d51-b7e8-31ac3621d798-kube-api-access-k9jrp" (OuterVolumeSpecName: "kube-api-access-k9jrp") pod "0d2ac10a-2179-4d51-b7e8-31ac3621d798" (UID: "0d2ac10a-2179-4d51-b7e8-31ac3621d798"). InnerVolumeSpecName "kube-api-access-k9jrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.796991 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9jrp\" (UniqueName: \"kubernetes.io/projected/0d2ac10a-2179-4d51-b7e8-31ac3621d798-kube-api-access-k9jrp\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797033 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a06db3-c381-45ef-883d-ee7393822e5a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797042 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c2a0233-04c5-4382-948d-809c1216b075-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797051 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zhz\" (UniqueName: \"kubernetes.io/projected/730d8243-e8f1-4b7a-b012-d65ff132d427-kube-api-access-l6zhz\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797060 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730d8243-e8f1-4b7a-b012-d65ff132d427-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797068 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8gzc\" (UniqueName: \"kubernetes.io/projected/0c2a0233-04c5-4382-948d-809c1216b075-kube-api-access-t8gzc\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797077 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqz62\" (UniqueName: \"kubernetes.io/projected/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-kube-api-access-rqz62\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797085 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797093 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d2ac10a-2179-4d51-b7e8-31ac3621d798-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.797100 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snn26\" (UniqueName: \"kubernetes.io/projected/c7a06db3-c381-45ef-883d-ee7393822e5a-kube-api-access-snn26\") on node \"crc\" DevicePath \"\""
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.922924 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5xpsl"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.923233 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5xpsl" event={"ID":"ede2117e-e3d5-46f6-8a54-1cd987370470","Type":"ContainerDied","Data":"5b80f05329cca2897dfa70c9bdc0e4c4f460e556ac74de0d1d1d646a96c57a30"}
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.923274 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b80f05329cca2897dfa70c9bdc0e4c4f460e556ac74de0d1d1d646a96c57a30"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.924899 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c3ab-account-create-update-6wqgk"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.924908 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c3ab-account-create-update-6wqgk" event={"ID":"730d8243-e8f1-4b7a-b012-d65ff132d427","Type":"ContainerDied","Data":"8b48030bead5b8810494ecca940f3ce0fe837fa405d9830629e0021c23f80e05"}
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.924942 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b48030bead5b8810494ecca940f3ce0fe837fa405d9830629e0021c23f80e05"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.926852 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9932-account-create-update-6qlx2" event={"ID":"0c2a0233-04c5-4382-948d-809c1216b075","Type":"ContainerDied","Data":"af5a106da150f0bb17662ab7ab3e10a8f540d31cc16fca47df9962bdb4fded43"}
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.926876 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5a106da150f0bb17662ab7ab3e10a8f540d31cc16fca47df9962bdb4fded43"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.926883 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9932-account-create-update-6qlx2"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.928308 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vm2gb" event={"ID":"5d595bdd-ffa6-4292-b4c2-1eba0736a6a4","Type":"ContainerDied","Data":"a98150e0e31af136066d55cd6aac76cfe239ea6044527ea73c361e8d1c5d2a0e"}
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.928343 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a98150e0e31af136066d55cd6aac76cfe239ea6044527ea73c361e8d1c5d2a0e"
Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.928435 4931 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-db-create-vm2gb" Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.929882 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dkqh" event={"ID":"0d2ac10a-2179-4d51-b7e8-31ac3621d798","Type":"ContainerDied","Data":"e7421e16eac378e7a0dea1f9d24b088f5207c980e51a7f3d6f384ea981d58f88"} Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.929903 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7421e16eac378e7a0dea1f9d24b088f5207c980e51a7f3d6f384ea981d58f88" Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.929965 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7dkqh" Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.938203 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-184a-account-create-update-b6t5s" event={"ID":"c7a06db3-c381-45ef-883d-ee7393822e5a","Type":"ContainerDied","Data":"be0cce4914a349c03bb254b8aec6996628db2bb168ecf5eee1fd286e7266cb60"} Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.938239 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be0cce4914a349c03bb254b8aec6996628db2bb168ecf5eee1fd286e7266cb60" Jan 30 06:38:17 crc kubenswrapper[4931]: I0130 06:38:17.938328 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-184a-account-create-update-b6t5s" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.153935 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thknd"] Jan 30 06:38:19 crc kubenswrapper[4931]: E0130 06:38:19.154359 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730d8243-e8f1-4b7a-b012-d65ff132d427" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154376 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="730d8243-e8f1-4b7a-b012-d65ff132d427" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: E0130 06:38:19.154396 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2a0233-04c5-4382-948d-809c1216b075" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154405 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2a0233-04c5-4382-948d-809c1216b075" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: E0130 06:38:19.154419 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154443 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: E0130 06:38:19.154460 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede2117e-e3d5-46f6-8a54-1cd987370470" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154470 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede2117e-e3d5-46f6-8a54-1cd987370470" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: E0130 06:38:19.154487 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a06db3-c381-45ef-883d-ee7393822e5a" 
containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154496 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a06db3-c381-45ef-883d-ee7393822e5a" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: E0130 06:38:19.154516 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2ac10a-2179-4d51-b7e8-31ac3621d798" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154524 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2ac10a-2179-4d51-b7e8-31ac3621d798" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154720 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede2117e-e3d5-46f6-8a54-1cd987370470" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154742 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="730d8243-e8f1-4b7a-b012-d65ff132d427" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154753 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2ac10a-2179-4d51-b7e8-31ac3621d798" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154765 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a06db3-c381-45ef-883d-ee7393822e5a" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154777 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2a0233-04c5-4382-948d-809c1216b075" containerName="mariadb-account-create-update" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.154788 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" containerName="mariadb-database-create" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.155517 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.158260 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.158918 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4nvdz" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.159386 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.177355 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thknd"] Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.226404 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbks\" (UniqueName: \"kubernetes.io/projected/693a2e91-1503-4caa-a71d-4f65d99a913c-kube-api-access-ggbks\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.226515 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-scripts\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.226544 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-config-data\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.226589 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.328143 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.328569 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbks\" (UniqueName: \"kubernetes.io/projected/693a2e91-1503-4caa-a71d-4f65d99a913c-kube-api-access-ggbks\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.328674 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-scripts\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: 
\"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.328712 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-config-data\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.331681 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.332170 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-scripts\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.332489 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-config-data\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.343585 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbks\" (UniqueName: \"kubernetes.io/projected/693a2e91-1503-4caa-a71d-4f65d99a913c-kube-api-access-ggbks\") pod \"nova-cell0-conductor-db-sync-thknd\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.481452 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.744299 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thknd"] Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.962536 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thknd" event={"ID":"693a2e91-1503-4caa-a71d-4f65d99a913c","Type":"ContainerStarted","Data":"0945601f9dc541b7489d17e996b29ddbb60ba07b0e0dec353dfa850db402078c"} Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.962580 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thknd" event={"ID":"693a2e91-1503-4caa-a71d-4f65d99a913c","Type":"ContainerStarted","Data":"ac62c375bd605110094f1dfe2f9000637195e690488df8000525cc79d4598be2"} Jan 30 06:38:19 crc kubenswrapper[4931]: I0130 06:38:19.981670 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-thknd" podStartSLOduration=0.981650457 podStartE2EDuration="981.650457ms" podCreationTimestamp="2026-01-30 06:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:19.974038965 +0000 UTC m=+5435.343949222" watchObservedRunningTime="2026-01-30 06:38:19.981650457 +0000 UTC m=+5435.351560714" Jan 30 06:38:25 crc kubenswrapper[4931]: I0130 06:38:25.028407 4931 generic.go:334] "Generic (PLEG): container finished" podID="693a2e91-1503-4caa-a71d-4f65d99a913c" containerID="0945601f9dc541b7489d17e996b29ddbb60ba07b0e0dec353dfa850db402078c" exitCode=0 Jan 30 06:38:25 crc kubenswrapper[4931]: I0130 06:38:25.028498 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thknd" event={"ID":"693a2e91-1503-4caa-a71d-4f65d99a913c","Type":"ContainerDied","Data":"0945601f9dc541b7489d17e996b29ddbb60ba07b0e0dec353dfa850db402078c"} Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.503714 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.604693 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-scripts\") pod \"693a2e91-1503-4caa-a71d-4f65d99a913c\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.604812 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-config-data\") pod \"693a2e91-1503-4caa-a71d-4f65d99a913c\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.604952 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-combined-ca-bundle\") pod \"693a2e91-1503-4caa-a71d-4f65d99a913c\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.605136 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggbks\" (UniqueName: \"kubernetes.io/projected/693a2e91-1503-4caa-a71d-4f65d99a913c-kube-api-access-ggbks\") pod \"693a2e91-1503-4caa-a71d-4f65d99a913c\" (UID: \"693a2e91-1503-4caa-a71d-4f65d99a913c\") " Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.611981 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693a2e91-1503-4caa-a71d-4f65d99a913c-kube-api-access-ggbks" (OuterVolumeSpecName: "kube-api-access-ggbks") pod "693a2e91-1503-4caa-a71d-4f65d99a913c" (UID: "693a2e91-1503-4caa-a71d-4f65d99a913c"). InnerVolumeSpecName "kube-api-access-ggbks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.612073 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-scripts" (OuterVolumeSpecName: "scripts") pod "693a2e91-1503-4caa-a71d-4f65d99a913c" (UID: "693a2e91-1503-4caa-a71d-4f65d99a913c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.634685 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "693a2e91-1503-4caa-a71d-4f65d99a913c" (UID: "693a2e91-1503-4caa-a71d-4f65d99a913c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.650900 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-config-data" (OuterVolumeSpecName: "config-data") pod "693a2e91-1503-4caa-a71d-4f65d99a913c" (UID: "693a2e91-1503-4caa-a71d-4f65d99a913c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.707035 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.707075 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.707091 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/693a2e91-1503-4caa-a71d-4f65d99a913c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:26 crc kubenswrapper[4931]: I0130 06:38:26.707149 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggbks\" (UniqueName: \"kubernetes.io/projected/693a2e91-1503-4caa-a71d-4f65d99a913c-kube-api-access-ggbks\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.053225 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-thknd" event={"ID":"693a2e91-1503-4caa-a71d-4f65d99a913c","Type":"ContainerDied","Data":"ac62c375bd605110094f1dfe2f9000637195e690488df8000525cc79d4598be2"} Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.053594 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac62c375bd605110094f1dfe2f9000637195e690488df8000525cc79d4598be2" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.053324 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-thknd" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.137842 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:38:27 crc kubenswrapper[4931]: E0130 06:38:27.138240 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693a2e91-1503-4caa-a71d-4f65d99a913c" containerName="nova-cell0-conductor-db-sync" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.138258 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="693a2e91-1503-4caa-a71d-4f65d99a913c" containerName="nova-cell0-conductor-db-sync" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.138483 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="693a2e91-1503-4caa-a71d-4f65d99a913c" containerName="nova-cell0-conductor-db-sync" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.139682 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.142313 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.142881 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4nvdz" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.150847 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.216360 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.216566 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.216771 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w9bw\" (UniqueName: \"kubernetes.io/projected/2e99598c-cc27-462b-8c5b-9647fdc031dc-kube-api-access-7w9bw\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.318444 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w9bw\" (UniqueName: \"kubernetes.io/projected/2e99598c-cc27-462b-8c5b-9647fdc031dc-kube-api-access-7w9bw\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.318567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.320920 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.334009 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.334919 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.337678 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w9bw\" (UniqueName: \"kubernetes.io/projected/2e99598c-cc27-462b-8c5b-9647fdc031dc-kube-api-access-7w9bw\") pod \"nova-cell0-conductor-0\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.363763 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.363853 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.457547 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:27 crc kubenswrapper[4931]: I0130 06:38:27.970359 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:38:28 crc kubenswrapper[4931]: I0130 06:38:28.066053 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e99598c-cc27-462b-8c5b-9647fdc031dc","Type":"ContainerStarted","Data":"7484fc206458a3c7c0f0725319e96b32501a736307f2141c6d77f6213f261ff9"} Jan 30 06:38:29 crc kubenswrapper[4931]: I0130 06:38:29.081697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e99598c-cc27-462b-8c5b-9647fdc031dc","Type":"ContainerStarted","Data":"a67c2af59ce774fac5d99d16e2c4d0308297f692031a8a81e468f5bef97702ac"} Jan 30 06:38:29 crc kubenswrapper[4931]: I0130 06:38:29.082185 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:29 crc kubenswrapper[4931]: I0130 06:38:29.120024 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.119997341 podStartE2EDuration="2.119997341s" podCreationTimestamp="2026-01-30 06:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:29.107480491 +0000 UTC m=+5444.477390828" watchObservedRunningTime="2026-01-30 06:38:29.119997341 +0000 UTC m=+5444.489907638" Jan 30 06:38:37 crc kubenswrapper[4931]: I0130 06:38:37.507084 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.062345 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-49hcs"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.063994 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.069405 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.074833 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-49hcs"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.076163 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.181956 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.182012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-scripts\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.182130 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-config-data\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.182168 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rmfn\" (UniqueName: \"kubernetes.io/projected/3f484f87-1747-491b-a6c5-dd1d51ff66af-kube-api-access-6rmfn\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.227314 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.229481 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.233638 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.261762 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.283905 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-config-data\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.283954 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rmfn\" (UniqueName: \"kubernetes.io/projected/3f484f87-1747-491b-a6c5-dd1d51ff66af-kube-api-access-6rmfn\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.283988 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-config-data\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.284015 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.284044 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzszf\" (UniqueName: \"kubernetes.io/projected/9399bfc6-7083-4978-b49d-bc46769c2b9e-kube-api-access-gzszf\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.284069 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.284094 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-scripts\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.295143 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-scripts\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.299320 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.299473 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.299685 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-config-data\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.308105 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rmfn\" (UniqueName: \"kubernetes.io/projected/3f484f87-1747-491b-a6c5-dd1d51ff66af-kube-api-access-6rmfn\") pod \"nova-cell0-cell-mapping-49hcs\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.319162 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.340256 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.349062 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.363474 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.365195 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.370393 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.375876 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.386985 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-config-data\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.387045 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.387083 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzszf\" (UniqueName: \"kubernetes.io/projected/9399bfc6-7083-4978-b49d-bc46769c2b9e-kube-api-access-gzszf\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.387121 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmssh\" (UniqueName: \"kubernetes.io/projected/dc16d452-7a63-4d86-b729-2f7384b3ea73-kube-api-access-vmssh\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.387236 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.387264 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.388628 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.413865 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-config-data\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.414765 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.426340 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzszf\" (UniqueName: \"kubernetes.io/projected/9399bfc6-7083-4978-b49d-bc46769c2b9e-kube-api-access-gzszf\") pod \"nova-scheduler-0\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.496385 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564639bd-5984-4822-80a8-c88dd5ae22da-logs\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.496489 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmssh\" (UniqueName: \"kubernetes.io/projected/dc16d452-7a63-4d86-b729-2f7384b3ea73-kube-api-access-vmssh\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.497074 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-config-data\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.497143 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vr8\" (UniqueName: \"kubernetes.io/projected/564639bd-5984-4822-80a8-c88dd5ae22da-kube-api-access-42vr8\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.497181 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.497214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.497242 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.504597 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.534413 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.536864 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.542969 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.546082 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.558972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmssh\" (UniqueName: \"kubernetes.io/projected/dc16d452-7a63-4d86-b729-2f7384b3ea73-kube-api-access-vmssh\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.568546 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.575843 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.585936 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67c9d4fb9c-cpnds"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.587978 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.595840 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67c9d4fb9c-cpnds"] Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.598880 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vr8\" (UniqueName: \"kubernetes.io/projected/564639bd-5984-4822-80a8-c88dd5ae22da-kube-api-access-42vr8\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.598940 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599036 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564639bd-5984-4822-80a8-c88dd5ae22da-logs\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599059 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599115 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-config-data\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599172 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-config-data\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599187 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b405de5-9885-473d-acc2-e974d5fcdcdf-logs\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599212 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdwb\" (UniqueName: \"kubernetes.io/projected/3b405de5-9885-473d-acc2-e974d5fcdcdf-kube-api-access-5qdwb\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.599833 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564639bd-5984-4822-80a8-c88dd5ae22da-logs\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.602964 
4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-config-data\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.604001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.616663 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vr8\" (UniqueName: \"kubernetes.io/projected/564639bd-5984-4822-80a8-c88dd5ae22da-kube-api-access-42vr8\") pod \"nova-metadata-0\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701217 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-config\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701493 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-nb\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701536 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701564 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-sb\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701604 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-config-data\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701671 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b405de5-9885-473d-acc2-e974d5fcdcdf-logs\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701698 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdwb\" (UniqueName: \"kubernetes.io/projected/3b405de5-9885-473d-acc2-e974d5fcdcdf-kube-api-access-5qdwb\") pod \"nova-api-0\" (UID: 
\"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701720 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-dns-svc\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.701755 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6xc\" (UniqueName: \"kubernetes.io/projected/c8aaa63b-49f3-44c6-abe3-d24692e5894e-kube-api-access-hm6xc\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.702329 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b405de5-9885-473d-acc2-e974d5fcdcdf-logs\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.705110 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.706197 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-config-data\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.719594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdwb\" (UniqueName: \"kubernetes.io/projected/3b405de5-9885-473d-acc2-e974d5fcdcdf-kube-api-access-5qdwb\") pod \"nova-api-0\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.803396 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-sb\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.803557 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-dns-svc\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.803608 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm6xc\" (UniqueName: \"kubernetes.io/projected/c8aaa63b-49f3-44c6-abe3-d24692e5894e-kube-api-access-hm6xc\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.803647 4931 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-config\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.803672 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-nb\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.804453 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-sb\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.805995 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-config\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.806221 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-dns-svc\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.810325 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-nb\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.824510 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.829098 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm6xc\" (UniqueName: \"kubernetes.io/projected/c8aaa63b-49f3-44c6-abe3-d24692e5894e-kube-api-access-hm6xc\") pod \"dnsmasq-dns-67c9d4fb9c-cpnds\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.844588 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.873935 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.907778 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:38 crc kubenswrapper[4931]: I0130 06:38:38.992233 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-49hcs"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.076916 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.087715 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xg8js"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.088851 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.093296 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.093372 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.095544 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xg8js"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.193555 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9399bfc6-7083-4978-b49d-bc46769c2b9e","Type":"ContainerStarted","Data":"a4fd2130796fb3af0e7960b4f61e0b3a284a488489630a1d386b0d9487a9d9c8"} Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.194625 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-49hcs" event={"ID":"3f484f87-1747-491b-a6c5-dd1d51ff66af","Type":"ContainerStarted","Data":"96ecf819b31adbd25c929467c1cd8090a82c8e42d995ce015f866e19d37cb78f"} Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.211866 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-config-data\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.211904 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-scripts\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.211985 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzr5v\" (UniqueName: \"kubernetes.io/projected/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-kube-api-access-xzr5v\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.212014 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " 
pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.292525 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.314317 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzr5v\" (UniqueName: \"kubernetes.io/projected/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-kube-api-access-xzr5v\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.314360 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.314479 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-config-data\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.314499 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-scripts\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.317880 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-config-data\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.317999 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-scripts\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.319098 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.330637 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzr5v\" (UniqueName: \"kubernetes.io/projected/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-kube-api-access-xzr5v\") pod \"nova-cell1-conductor-db-sync-xg8js\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.411568 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.419060 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.432065 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.532062 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67c9d4fb9c-cpnds"] Jan 30 06:38:39 crc kubenswrapper[4931]: I0130 06:38:39.954137 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xg8js"] Jan 30 06:38:39 crc kubenswrapper[4931]: W0130 06:38:39.960618 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97ed4dbf_5bb0_45b9_bc15_763a93ba7375.slice/crio-d61fb4dbc351b26c251d3469015fc7f9033401d22cd6c8b47548cb37e7ac9418 WatchSource:0}: Error finding container d61fb4dbc351b26c251d3469015fc7f9033401d22cd6c8b47548cb37e7ac9418: Status 404 returned error can't find the container with id d61fb4dbc351b26c251d3469015fc7f9033401d22cd6c8b47548cb37e7ac9418 Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.204573 4931 generic.go:334] "Generic (PLEG): container finished" podID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerID="297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181" exitCode=0 Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.204637 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" event={"ID":"c8aaa63b-49f3-44c6-abe3-d24692e5894e","Type":"ContainerDied","Data":"297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.204662 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" event={"ID":"c8aaa63b-49f3-44c6-abe3-d24692e5894e","Type":"ContainerStarted","Data":"be21e08fe5735cb0ef573095e6460329d55a3b26fd373de9ad820520ace903ab"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.207044 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9399bfc6-7083-4978-b49d-bc46769c2b9e","Type":"ContainerStarted","Data":"813a4c2198bcec0f1cab1d8053ab7437050231fbc0dad71c25c447ce377f8fcf"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.209277 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-49hcs" event={"ID":"3f484f87-1747-491b-a6c5-dd1d51ff66af","Type":"ContainerStarted","Data":"f317a267d263377c23363a6996fcb39842543912fc8f152fbbaa9e502e107fac"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.215892 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b405de5-9885-473d-acc2-e974d5fcdcdf","Type":"ContainerStarted","Data":"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.215934 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b405de5-9885-473d-acc2-e974d5fcdcdf","Type":"ContainerStarted","Data":"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.215943 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"3b405de5-9885-473d-acc2-e974d5fcdcdf","Type":"ContainerStarted","Data":"8e2f7fd7e3b97e352d13a85c9cb339f64f9b7aade8260e679a2c93c61eeeff04"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.234840 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"564639bd-5984-4822-80a8-c88dd5ae22da","Type":"ContainerStarted","Data":"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.234878 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"564639bd-5984-4822-80a8-c88dd5ae22da","Type":"ContainerStarted","Data":"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.234887 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"564639bd-5984-4822-80a8-c88dd5ae22da","Type":"ContainerStarted","Data":"12fd945e445bdfd59f114b8ea4c531fff45ade762e92da96058d89f075ac3f03"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.241302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc16d452-7a63-4d86-b729-2f7384b3ea73","Type":"ContainerStarted","Data":"781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.241340 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc16d452-7a63-4d86-b729-2f7384b3ea73","Type":"ContainerStarted","Data":"be2cb5795144a0336c26c2ce840d01e8f6b40f1134f0aca0ca6716edd8f9b6e4"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.245671 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.245651963 podStartE2EDuration="2.245651963s" podCreationTimestamp="2026-01-30 06:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:40.244166391 +0000 UTC m=+5455.614076648" watchObservedRunningTime="2026-01-30 06:38:40.245651963 +0000 UTC m=+5455.615562230" Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.248757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xg8js" event={"ID":"97ed4dbf-5bb0-45b9-bc15-763a93ba7375","Type":"ContainerStarted","Data":"4fbc33cf72a98eaa43152775b96af49bda5487d618cfc4a531333efaa8d2b27d"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.248811 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xg8js" event={"ID":"97ed4dbf-5bb0-45b9-bc15-763a93ba7375","Type":"ContainerStarted","Data":"d61fb4dbc351b26c251d3469015fc7f9033401d22cd6c8b47548cb37e7ac9418"} Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.294111 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.294086148 podStartE2EDuration="2.294086148s" podCreationTimestamp="2026-01-30 06:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:40.283055189 +0000 UTC m=+5455.652965446" watchObservedRunningTime="2026-01-30 06:38:40.294086148 +0000 UTC m=+5455.663996405" Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.323777 4931 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-cell-mapping-49hcs" podStartSLOduration=2.323752118 podStartE2EDuration="2.323752118s" podCreationTimestamp="2026-01-30 06:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:40.304691654 +0000 UTC m=+5455.674601901" watchObservedRunningTime="2026-01-30 06:38:40.323752118 +0000 UTC m=+5455.693662375" Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.331101 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xg8js" podStartSLOduration=1.331076422 podStartE2EDuration="1.331076422s" podCreationTimestamp="2026-01-30 06:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:40.319517339 +0000 UTC m=+5455.689427596" watchObservedRunningTime="2026-01-30 06:38:40.331076422 +0000 UTC m=+5455.700986679" Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.394805 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.394786835 podStartE2EDuration="2.394786835s" podCreationTimestamp="2026-01-30 06:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:40.35923097 +0000 UTC m=+5455.729141247" watchObservedRunningTime="2026-01-30 06:38:40.394786835 +0000 UTC m=+5455.764697092" Jan 30 06:38:40 crc kubenswrapper[4931]: I0130 06:38:40.399691 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.399682912 podStartE2EDuration="2.399682912s" podCreationTimestamp="2026-01-30 06:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:40.344189069 +0000 UTC m=+5455.714099326" watchObservedRunningTime="2026-01-30 06:38:40.399682912 +0000 UTC m=+5455.769593169" Jan 30 06:38:41 crc kubenswrapper[4931]: I0130 06:38:41.267706 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" event={"ID":"c8aaa63b-49f3-44c6-abe3-d24692e5894e","Type":"ContainerStarted","Data":"d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69"} Jan 30 06:38:41 crc kubenswrapper[4931]: I0130 06:38:41.312933 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" podStartSLOduration=3.31289401 podStartE2EDuration="3.31289401s" podCreationTimestamp="2026-01-30 06:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:41.289254469 +0000 UTC m=+5456.659164726" watchObservedRunningTime="2026-01-30 06:38:41.31289401 +0000 UTC m=+5456.682804297" Jan 30 06:38:42 crc kubenswrapper[4931]: I0130 06:38:42.274360 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:43 crc kubenswrapper[4931]: I0130 06:38:43.291671 4931 generic.go:334] "Generic (PLEG): container finished" podID="97ed4dbf-5bb0-45b9-bc15-763a93ba7375" containerID="4fbc33cf72a98eaa43152775b96af49bda5487d618cfc4a531333efaa8d2b27d" exitCode=0 Jan 30 06:38:43 crc kubenswrapper[4931]: I0130 06:38:43.291774 4931 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xg8js" event={"ID":"97ed4dbf-5bb0-45b9-bc15-763a93ba7375","Type":"ContainerDied","Data":"4fbc33cf72a98eaa43152775b96af49bda5487d618cfc4a531333efaa8d2b27d"} Jan 30 06:38:43 crc kubenswrapper[4931]: I0130 06:38:43.569368 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 06:38:43 crc kubenswrapper[4931]: I0130 06:38:43.824742 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:43 crc kubenswrapper[4931]: I0130 06:38:43.845351 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:38:43 crc kubenswrapper[4931]: I0130 06:38:43.845451 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.305838 4931 generic.go:334] "Generic (PLEG): container finished" podID="3f484f87-1747-491b-a6c5-dd1d51ff66af" containerID="f317a267d263377c23363a6996fcb39842543912fc8f152fbbaa9e502e107fac" exitCode=0 Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.306199 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-49hcs" event={"ID":"3f484f87-1747-491b-a6c5-dd1d51ff66af","Type":"ContainerDied","Data":"f317a267d263377c23363a6996fcb39842543912fc8f152fbbaa9e502e107fac"} Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.708496 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.868235 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzr5v\" (UniqueName: \"kubernetes.io/projected/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-kube-api-access-xzr5v\") pod \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.868294 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-config-data\") pod \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.868555 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-combined-ca-bundle\") pod \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.868607 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-scripts\") pod \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\" (UID: \"97ed4dbf-5bb0-45b9-bc15-763a93ba7375\") " Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.875079 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-kube-api-access-xzr5v" (OuterVolumeSpecName: "kube-api-access-xzr5v") pod "97ed4dbf-5bb0-45b9-bc15-763a93ba7375" (UID: "97ed4dbf-5bb0-45b9-bc15-763a93ba7375"). InnerVolumeSpecName "kube-api-access-xzr5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.875910 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-scripts" (OuterVolumeSpecName: "scripts") pod "97ed4dbf-5bb0-45b9-bc15-763a93ba7375" (UID: "97ed4dbf-5bb0-45b9-bc15-763a93ba7375"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.895311 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97ed4dbf-5bb0-45b9-bc15-763a93ba7375" (UID: "97ed4dbf-5bb0-45b9-bc15-763a93ba7375"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.921022 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-config-data" (OuterVolumeSpecName: "config-data") pod "97ed4dbf-5bb0-45b9-bc15-763a93ba7375" (UID: "97ed4dbf-5bb0-45b9-bc15-763a93ba7375"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.971651 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzr5v\" (UniqueName: \"kubernetes.io/projected/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-kube-api-access-xzr5v\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.971712 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.971734 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:44 crc kubenswrapper[4931]: I0130 06:38:44.971753 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97ed4dbf-5bb0-45b9-bc15-763a93ba7375-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.321316 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xg8js" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.321334 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xg8js" event={"ID":"97ed4dbf-5bb0-45b9-bc15-763a93ba7375","Type":"ContainerDied","Data":"d61fb4dbc351b26c251d3469015fc7f9033401d22cd6c8b47548cb37e7ac9418"} Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.321405 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d61fb4dbc351b26c251d3469015fc7f9033401d22cd6c8b47548cb37e7ac9418" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.985478 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:38:45 crc kubenswrapper[4931]: E0130 06:38:45.986162 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ed4dbf-5bb0-45b9-bc15-763a93ba7375" containerName="nova-cell1-conductor-db-sync" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.986179 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ed4dbf-5bb0-45b9-bc15-763a93ba7375" containerName="nova-cell1-conductor-db-sync" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.986398 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ed4dbf-5bb0-45b9-bc15-763a93ba7375" containerName="nova-cell1-conductor-db-sync" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.987091 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.989490 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 06:38:45 crc kubenswrapper[4931]: I0130 06:38:45.997102 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.093077 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.093215 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt4r9\" (UniqueName: \"kubernetes.io/projected/c9b04495-2e29-4188-adbe-e6ed3669c25a-kube-api-access-tt4r9\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.093508 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.200589 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.200698 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.200743 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt4r9\" (UniqueName: \"kubernetes.io/projected/c9b04495-2e29-4188-adbe-e6ed3669c25a-kube-api-access-tt4r9\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.207482 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.208862 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.214569 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.224728 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt4r9\" (UniqueName: \"kubernetes.io/projected/c9b04495-2e29-4188-adbe-e6ed3669c25a-kube-api-access-tt4r9\") pod \"nova-cell1-conductor-0\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.294069 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.314896 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.383624 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-49hcs" event={"ID":"3f484f87-1747-491b-a6c5-dd1d51ff66af","Type":"ContainerDied","Data":"96ecf819b31adbd25c929467c1cd8090a82c8e42d995ce015f866e19d37cb78f"} Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.383678 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-49hcs" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.383680 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ecf819b31adbd25c929467c1cd8090a82c8e42d995ce015f866e19d37cb78f" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.403574 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-combined-ca-bundle\") pod \"3f484f87-1747-491b-a6c5-dd1d51ff66af\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.403784 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmfn\" (UniqueName: \"kubernetes.io/projected/3f484f87-1747-491b-a6c5-dd1d51ff66af-kube-api-access-6rmfn\") pod \"3f484f87-1747-491b-a6c5-dd1d51ff66af\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.403927 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-config-data\") pod \"3f484f87-1747-491b-a6c5-dd1d51ff66af\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.404011 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-scripts\") pod \"3f484f87-1747-491b-a6c5-dd1d51ff66af\" (UID: \"3f484f87-1747-491b-a6c5-dd1d51ff66af\") " Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.412038 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f484f87-1747-491b-a6c5-dd1d51ff66af-kube-api-access-6rmfn" (OuterVolumeSpecName: "kube-api-access-6rmfn") pod "3f484f87-1747-491b-a6c5-dd1d51ff66af" (UID: "3f484f87-1747-491b-a6c5-dd1d51ff66af"). InnerVolumeSpecName "kube-api-access-6rmfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.413672 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-scripts" (OuterVolumeSpecName: "scripts") pod "3f484f87-1747-491b-a6c5-dd1d51ff66af" (UID: "3f484f87-1747-491b-a6c5-dd1d51ff66af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.426239 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f484f87-1747-491b-a6c5-dd1d51ff66af" (UID: "3f484f87-1747-491b-a6c5-dd1d51ff66af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.439056 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-config-data" (OuterVolumeSpecName: "config-data") pod "3f484f87-1747-491b-a6c5-dd1d51ff66af" (UID: "3f484f87-1747-491b-a6c5-dd1d51ff66af"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.518833 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rmfn\" (UniqueName: \"kubernetes.io/projected/3f484f87-1747-491b-a6c5-dd1d51ff66af-kube-api-access-6rmfn\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.518879 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.518891 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.518907 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f484f87-1747-491b-a6c5-dd1d51ff66af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.558855 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.559152 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-log" containerID="cri-o://02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2" gracePeriod=30 Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.559456 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-api" containerID="cri-o://0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794" gracePeriod=30 Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.566307 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.566524 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9399bfc6-7083-4978-b49d-bc46769c2b9e" containerName="nova-scheduler-scheduler" containerID="cri-o://813a4c2198bcec0f1cab1d8053ab7437050231fbc0dad71c25c447ce377f8fcf" gracePeriod=30 Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.590625 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.590839 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-log" containerID="cri-o://cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95" gracePeriod=30 Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.591676 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-metadata" containerID="cri-o://1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3" gracePeriod=30 Jan 30 06:38:46 crc kubenswrapper[4931]: I0130 06:38:46.851264 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.068481 4931 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.231033 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-combined-ca-bundle\") pod \"3b405de5-9885-473d-acc2-e974d5fcdcdf\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.231372 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qdwb\" (UniqueName: \"kubernetes.io/projected/3b405de5-9885-473d-acc2-e974d5fcdcdf-kube-api-access-5qdwb\") pod \"3b405de5-9885-473d-acc2-e974d5fcdcdf\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.231441 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-config-data\") pod \"3b405de5-9885-473d-acc2-e974d5fcdcdf\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.231640 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b405de5-9885-473d-acc2-e974d5fcdcdf-logs\") pod \"3b405de5-9885-473d-acc2-e974d5fcdcdf\" (UID: \"3b405de5-9885-473d-acc2-e974d5fcdcdf\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.232323 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b405de5-9885-473d-acc2-e974d5fcdcdf-logs" (OuterVolumeSpecName: "logs") pod "3b405de5-9885-473d-acc2-e974d5fcdcdf" (UID: "3b405de5-9885-473d-acc2-e974d5fcdcdf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.236514 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b405de5-9885-473d-acc2-e974d5fcdcdf-kube-api-access-5qdwb" (OuterVolumeSpecName: "kube-api-access-5qdwb") pod "3b405de5-9885-473d-acc2-e974d5fcdcdf" (UID: "3b405de5-9885-473d-acc2-e974d5fcdcdf"). InnerVolumeSpecName "kube-api-access-5qdwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.237676 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.257077 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b405de5-9885-473d-acc2-e974d5fcdcdf" (UID: "3b405de5-9885-473d-acc2-e974d5fcdcdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.273492 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-config-data" (OuterVolumeSpecName: "config-data") pod "3b405de5-9885-473d-acc2-e974d5fcdcdf" (UID: "3b405de5-9885-473d-acc2-e974d5fcdcdf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.333331 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42vr8\" (UniqueName: \"kubernetes.io/projected/564639bd-5984-4822-80a8-c88dd5ae22da-kube-api-access-42vr8\") pod \"564639bd-5984-4822-80a8-c88dd5ae22da\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.333413 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-config-data\") pod \"564639bd-5984-4822-80a8-c88dd5ae22da\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.333472 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-combined-ca-bundle\") pod \"564639bd-5984-4822-80a8-c88dd5ae22da\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.333572 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564639bd-5984-4822-80a8-c88dd5ae22da-logs\") pod \"564639bd-5984-4822-80a8-c88dd5ae22da\" (UID: \"564639bd-5984-4822-80a8-c88dd5ae22da\") " Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.334203 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b405de5-9885-473d-acc2-e974d5fcdcdf-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.334253 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.334269 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qdwb\" (UniqueName: \"kubernetes.io/projected/3b405de5-9885-473d-acc2-e974d5fcdcdf-kube-api-access-5qdwb\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.334281 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b405de5-9885-473d-acc2-e974d5fcdcdf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.334985 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564639bd-5984-4822-80a8-c88dd5ae22da-logs" (OuterVolumeSpecName: "logs") pod "564639bd-5984-4822-80a8-c88dd5ae22da" (UID: "564639bd-5984-4822-80a8-c88dd5ae22da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.338809 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564639bd-5984-4822-80a8-c88dd5ae22da-kube-api-access-42vr8" (OuterVolumeSpecName: "kube-api-access-42vr8") pod "564639bd-5984-4822-80a8-c88dd5ae22da" (UID: "564639bd-5984-4822-80a8-c88dd5ae22da"). InnerVolumeSpecName "kube-api-access-42vr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.358107 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564639bd-5984-4822-80a8-c88dd5ae22da" (UID: "564639bd-5984-4822-80a8-c88dd5ae22da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.378455 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-config-data" (OuterVolumeSpecName: "config-data") pod "564639bd-5984-4822-80a8-c88dd5ae22da" (UID: "564639bd-5984-4822-80a8-c88dd5ae22da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.392102 4931 generic.go:334] "Generic (PLEG): container finished" podID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerID="0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794" exitCode=0 Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.393286 4931 generic.go:334] "Generic (PLEG): container finished" podID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerID="02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2" exitCode=143 Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.392354 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.392264 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b405de5-9885-473d-acc2-e974d5fcdcdf","Type":"ContainerDied","Data":"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.394053 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b405de5-9885-473d-acc2-e974d5fcdcdf","Type":"ContainerDied","Data":"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.394076 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3b405de5-9885-473d-acc2-e974d5fcdcdf","Type":"ContainerDied","Data":"8e2f7fd7e3b97e352d13a85c9cb339f64f9b7aade8260e679a2c93c61eeeff04"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.394097 4931 scope.go:117] "RemoveContainer" containerID="0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.396768 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c9b04495-2e29-4188-adbe-e6ed3669c25a","Type":"ContainerStarted","Data":"7956c67be4873a213d6ce531a234902ab8e420fef8d81bd2f9cc50a55b2ed19e"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.396803 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c9b04495-2e29-4188-adbe-e6ed3669c25a","Type":"ContainerStarted","Data":"c53e952c29d0f7bb7753df2ecd373b270a2a034437bebd33a1a8707e3ab33ea8"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.397199 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.404235 4931 generic.go:334] 
"Generic (PLEG): container finished" podID="564639bd-5984-4822-80a8-c88dd5ae22da" containerID="1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3" exitCode=0 Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.404453 4931 generic.go:334] "Generic (PLEG): container finished" podID="564639bd-5984-4822-80a8-c88dd5ae22da" containerID="cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95" exitCode=143 Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.404669 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"564639bd-5984-4822-80a8-c88dd5ae22da","Type":"ContainerDied","Data":"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.404828 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"564639bd-5984-4822-80a8-c88dd5ae22da","Type":"ContainerDied","Data":"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.404965 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"564639bd-5984-4822-80a8-c88dd5ae22da","Type":"ContainerDied","Data":"12fd945e445bdfd59f114b8ea4c531fff45ade762e92da96058d89f075ac3f03"} Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.405359 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.428512 4931 scope.go:117] "RemoveContainer" containerID="02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.443152 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42vr8\" (UniqueName: \"kubernetes.io/projected/564639bd-5984-4822-80a8-c88dd5ae22da-kube-api-access-42vr8\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.443505 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.443593 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564639bd-5984-4822-80a8-c88dd5ae22da-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.443693 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564639bd-5984-4822-80a8-c88dd5ae22da-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.450639 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.450608549 podStartE2EDuration="2.450608549s" podCreationTimestamp="2026-01-30 06:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:47.426947357 +0000 UTC m=+5462.796857614" watchObservedRunningTime="2026-01-30 06:38:47.450608549 +0000 UTC m=+5462.820518846" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.473398 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.516064 4931 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.519666 4931 scope.go:117] "RemoveContainer" containerID="0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.520632 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794\": container with ID starting with 0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794 not found: ID does not exist" containerID="0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.520664 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794"} err="failed to get container status \"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794\": rpc error: code = NotFound desc = could not find container \"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794\": container with ID starting with 0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.520685 4931 scope.go:117] "RemoveContainer" containerID="02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.520907 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2\": container with ID starting with 02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2 not found: ID does not exist" containerID="02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.521579 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2"} err="failed to get container status \"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2\": rpc error: code = NotFound desc = could not find container \"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2\": container with ID starting with 02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.521608 4931 scope.go:117] "RemoveContainer" containerID="0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.521924 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794"} err="failed to get container status \"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794\": rpc error: code = NotFound desc = could not find container \"0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794\": container with ID starting with 0f837ad88ec9472bf685a2a4e4bca226406e5e671841748e97cd2f939ad6d794 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.521979 4931 scope.go:117] "RemoveContainer" containerID="02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.522397 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2"} err="failed to get container status \"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2\": rpc error: code = NotFound desc = could not find container \"02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2\": container with ID starting with 02ba2014b1566d696849ad8dcc9456620c92cebec9969725ae530ca735245ce2 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.522439 4931 scope.go:117] "RemoveContainer" containerID="1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.524777 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.525170 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-log" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525181 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-log" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.525200 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-api" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525206 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-api" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.525215 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-metadata" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525221 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-metadata" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.525234 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-log" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525239 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-log" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.525254 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f484f87-1747-491b-a6c5-dd1d51ff66af" containerName="nova-manage" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525259 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f484f87-1747-491b-a6c5-dd1d51ff66af" containerName="nova-manage" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525402 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-log" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525429 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f484f87-1747-491b-a6c5-dd1d51ff66af" containerName="nova-manage" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525443 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" containerName="nova-api-api" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525454 4931 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-metadata" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.525465 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" containerName="nova-metadata-log" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.526715 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.532672 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.538530 4931 scope.go:117] "RemoveContainer" containerID="cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.544698 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.552439 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.559515 4931 scope.go:117] "RemoveContainer" containerID="1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.560490 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.562753 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3\": container with ID starting with 1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3 not found: ID does not exist" containerID="1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.562788 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3"} err="failed to get container status \"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3\": rpc error: code = NotFound desc = could not find container \"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3\": container with ID starting with 1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.562817 4931 scope.go:117] "RemoveContainer" containerID="cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95" Jan 30 06:38:47 crc kubenswrapper[4931]: E0130 06:38:47.563149 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95\": container with ID starting with cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95 not found: ID does not exist" containerID="cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.563186 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95"} err="failed to get container status \"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95\": rpc error: code = NotFound desc = could not find container 
\"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95\": container with ID starting with cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.563211 4931 scope.go:117] "RemoveContainer" containerID="1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.563567 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3"} err="failed to get container status \"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3\": rpc error: code = NotFound desc = could not find container \"1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3\": container with ID starting with 1df264c72c48138559ebff03be7bd1d563bc44130837ce7ac5ce396f96609ca3 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.563585 4931 scope.go:117] "RemoveContainer" containerID="cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.563829 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95"} err="failed to get container status \"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95\": rpc error: code = NotFound desc = could not find container \"cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95\": container with ID starting with cae1c4faeb2daaa7074a7b00fa2d92090fd657c7c771ec3ca9a114e70a76bb95 not found: ID does not exist" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.568242 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.569684 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.571732 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.576676 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.647977 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648093 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9psj\" (UniqueName: \"kubernetes.io/projected/dd921adf-3fc0-4727-bf34-17203123e432-kube-api-access-z9psj\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648271 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpc9m\" (UniqueName: \"kubernetes.io/projected/7144d1e4-cf7f-4cd9-891c-02bf466f894f-kube-api-access-mpc9m\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648327 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648362 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-config-data\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648480 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-config-data\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648536 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd921adf-3fc0-4727-bf34-17203123e432-logs\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.648573 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7144d1e4-cf7f-4cd9-891c-02bf466f894f-logs\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.749753 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-config-data\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.749813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd921adf-3fc0-4727-bf34-17203123e432-logs\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.749838 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7144d1e4-cf7f-4cd9-891c-02bf466f894f-logs\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.749892 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.749924 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9psj\" (UniqueName: \"kubernetes.io/projected/dd921adf-3fc0-4727-bf34-17203123e432-kube-api-access-z9psj\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.750010 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpc9m\" (UniqueName: \"kubernetes.io/projected/7144d1e4-cf7f-4cd9-891c-02bf466f894f-kube-api-access-mpc9m\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.750032 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.750049 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-config-data\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.751588 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd921adf-3fc0-4727-bf34-17203123e432-logs\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.754884 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-config-data\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.768317 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7144d1e4-cf7f-4cd9-891c-02bf466f894f-logs\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.769049 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.769303 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-config-data\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.769727 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.772349 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9psj\" (UniqueName: \"kubernetes.io/projected/dd921adf-3fc0-4727-bf34-17203123e432-kube-api-access-z9psj\") pod \"nova-metadata-0\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.773001 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpc9m\" (UniqueName: \"kubernetes.io/projected/7144d1e4-cf7f-4cd9-891c-02bf466f894f-kube-api-access-mpc9m\") pod \"nova-api-0\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " pod="openstack/nova-api-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.848910 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:38:47 crc kubenswrapper[4931]: I0130 06:38:47.884112 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.271581 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:38:48 crc kubenswrapper[4931]: W0130 06:38:48.276978 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7144d1e4_cf7f_4cd9_891c_02bf466f894f.slice/crio-d8da632fe1a21ba627e5a93e9f2262e371b5481c6d1d50e41210e4e09d1d78c5 WatchSource:0}: Error finding container d8da632fe1a21ba627e5a93e9f2262e371b5481c6d1d50e41210e4e09d1d78c5: Status 404 returned error can't find the container with id d8da632fe1a21ba627e5a93e9f2262e371b5481c6d1d50e41210e4e09d1d78c5 Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.360507 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:38:48 crc kubenswrapper[4931]: W0130 06:38:48.375297 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd921adf_3fc0_4727_bf34_17203123e432.slice/crio-3427a754aa9dbc9c068a50beefaa44100c472db7b66af28a59aa83acea56ca82 WatchSource:0}: Error finding container 3427a754aa9dbc9c068a50beefaa44100c472db7b66af28a59aa83acea56ca82: Status 404 returned error can't find the container with id 3427a754aa9dbc9c068a50beefaa44100c472db7b66af28a59aa83acea56ca82 Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.419524 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7144d1e4-cf7f-4cd9-891c-02bf466f894f","Type":"ContainerStarted","Data":"d8da632fe1a21ba627e5a93e9f2262e371b5481c6d1d50e41210e4e09d1d78c5"} Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.421673 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd921adf-3fc0-4727-bf34-17203123e432","Type":"ContainerStarted","Data":"3427a754aa9dbc9c068a50beefaa44100c472db7b66af28a59aa83acea56ca82"} Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.825264 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.839102 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.909695 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.972885 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf9d65499-j99dc"] Jan 30 06:38:48 crc kubenswrapper[4931]: I0130 06:38:48.973109 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerName="dnsmasq-dns" containerID="cri-o://2e8dbb3daa04d0b7ca86823625c46d0d78b12efba028999fcdcceb4252172d0f" gracePeriod=10 Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.436231 4931 generic.go:334] "Generic (PLEG): container finished" podID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerID="2e8dbb3daa04d0b7ca86823625c46d0d78b12efba028999fcdcceb4252172d0f" exitCode=0 Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.442542 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b405de5-9885-473d-acc2-e974d5fcdcdf" 
path="/var/lib/kubelet/pods/3b405de5-9885-473d-acc2-e974d5fcdcdf/volumes" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.443168 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564639bd-5984-4822-80a8-c88dd5ae22da" path="/var/lib/kubelet/pods/564639bd-5984-4822-80a8-c88dd5ae22da/volumes" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.443787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" event={"ID":"94c3e877-4729-4777-9460-2fdce31b2bc3","Type":"ContainerDied","Data":"2e8dbb3daa04d0b7ca86823625c46d0d78b12efba028999fcdcceb4252172d0f"} Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.466557 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7144d1e4-cf7f-4cd9-891c-02bf466f894f","Type":"ContainerStarted","Data":"99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed"} Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.466600 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7144d1e4-cf7f-4cd9-891c-02bf466f894f","Type":"ContainerStarted","Data":"587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8"} Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.470301 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd921adf-3fc0-4727-bf34-17203123e432","Type":"ContainerStarted","Data":"c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5"} Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.470333 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd921adf-3fc0-4727-bf34-17203123e432","Type":"ContainerStarted","Data":"93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb"} Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.487721 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.507653 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5076304350000003 podStartE2EDuration="2.507630435s" podCreationTimestamp="2026-01-30 06:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:49.491464883 +0000 UTC m=+5464.861375160" watchObservedRunningTime="2026-01-30 06:38:49.507630435 +0000 UTC m=+5464.877540692" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.525768 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.538555 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.53853694 podStartE2EDuration="2.53853694s" podCreationTimestamp="2026-01-30 06:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:49.518749017 +0000 UTC m=+5464.888659274" watchObservedRunningTime="2026-01-30 06:38:49.53853694 +0000 UTC m=+5464.908447197" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.705092 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-sb\") pod \"94c3e877-4729-4777-9460-2fdce31b2bc3\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.705144 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrwmf\" (UniqueName: \"kubernetes.io/projected/94c3e877-4729-4777-9460-2fdce31b2bc3-kube-api-access-qrwmf\") pod \"94c3e877-4729-4777-9460-2fdce31b2bc3\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.705165 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-dns-svc\") pod \"94c3e877-4729-4777-9460-2fdce31b2bc3\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.705367 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-config\") pod \"94c3e877-4729-4777-9460-2fdce31b2bc3\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.705873 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-nb\") pod \"94c3e877-4729-4777-9460-2fdce31b2bc3\" (UID: \"94c3e877-4729-4777-9460-2fdce31b2bc3\") " Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.714188 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c3e877-4729-4777-9460-2fdce31b2bc3-kube-api-access-qrwmf" (OuterVolumeSpecName: "kube-api-access-qrwmf") pod "94c3e877-4729-4777-9460-2fdce31b2bc3" (UID: "94c3e877-4729-4777-9460-2fdce31b2bc3"). InnerVolumeSpecName "kube-api-access-qrwmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.747588 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94c3e877-4729-4777-9460-2fdce31b2bc3" (UID: "94c3e877-4729-4777-9460-2fdce31b2bc3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.748406 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94c3e877-4729-4777-9460-2fdce31b2bc3" (UID: "94c3e877-4729-4777-9460-2fdce31b2bc3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.748818 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94c3e877-4729-4777-9460-2fdce31b2bc3" (UID: "94c3e877-4729-4777-9460-2fdce31b2bc3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.762382 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-config" (OuterVolumeSpecName: "config") pod "94c3e877-4729-4777-9460-2fdce31b2bc3" (UID: "94c3e877-4729-4777-9460-2fdce31b2bc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.808138 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.808167 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.808179 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.808189 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrwmf\" (UniqueName: \"kubernetes.io/projected/94c3e877-4729-4777-9460-2fdce31b2bc3-kube-api-access-qrwmf\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:49 crc kubenswrapper[4931]: I0130 06:38:49.808199 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94c3e877-4729-4777-9460-2fdce31b2bc3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.481338 4931 generic.go:334] "Generic (PLEG): container finished" podID="9399bfc6-7083-4978-b49d-bc46769c2b9e" containerID="813a4c2198bcec0f1cab1d8053ab7437050231fbc0dad71c25c447ce377f8fcf" exitCode=0 Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.481469 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9399bfc6-7083-4978-b49d-bc46769c2b9e","Type":"ContainerDied","Data":"813a4c2198bcec0f1cab1d8053ab7437050231fbc0dad71c25c447ce377f8fcf"} Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.484182 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" event={"ID":"94c3e877-4729-4777-9460-2fdce31b2bc3","Type":"ContainerDied","Data":"cbfdef9dfaed32413bc6b3a12689ef221574794a4832dba0bff0f3e04d140623"} Jan 30 06:38:50 crc 
kubenswrapper[4931]: I0130 06:38:50.484261 4931 scope.go:117] "RemoveContainer" containerID="2e8dbb3daa04d0b7ca86823625c46d0d78b12efba028999fcdcceb4252172d0f" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.484491 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf9d65499-j99dc" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.527511 4931 scope.go:117] "RemoveContainer" containerID="d3534b4266cad4f8c042a5d1a723852bc5684f3d2cae49aae0ea01f2e1276ee4" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.529579 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf9d65499-j99dc"] Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.545014 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bf9d65499-j99dc"] Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.762987 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.938974 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-combined-ca-bundle\") pod \"9399bfc6-7083-4978-b49d-bc46769c2b9e\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.939566 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-config-data\") pod \"9399bfc6-7083-4978-b49d-bc46769c2b9e\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.939693 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzszf\" (UniqueName: \"kubernetes.io/projected/9399bfc6-7083-4978-b49d-bc46769c2b9e-kube-api-access-gzszf\") pod \"9399bfc6-7083-4978-b49d-bc46769c2b9e\" (UID: \"9399bfc6-7083-4978-b49d-bc46769c2b9e\") " Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.949986 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9399bfc6-7083-4978-b49d-bc46769c2b9e-kube-api-access-gzszf" (OuterVolumeSpecName: "kube-api-access-gzszf") pod "9399bfc6-7083-4978-b49d-bc46769c2b9e" (UID: "9399bfc6-7083-4978-b49d-bc46769c2b9e"). InnerVolumeSpecName "kube-api-access-gzszf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.987631 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9399bfc6-7083-4978-b49d-bc46769c2b9e" (UID: "9399bfc6-7083-4978-b49d-bc46769c2b9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:50 crc kubenswrapper[4931]: I0130 06:38:50.988364 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-config-data" (OuterVolumeSpecName: "config-data") pod "9399bfc6-7083-4978-b49d-bc46769c2b9e" (UID: "9399bfc6-7083-4978-b49d-bc46769c2b9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.044081 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.044157 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzszf\" (UniqueName: \"kubernetes.io/projected/9399bfc6-7083-4978-b49d-bc46769c2b9e-kube-api-access-gzszf\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.044187 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9399bfc6-7083-4978-b49d-bc46769c2b9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.441564 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" path="/var/lib/kubelet/pods/94c3e877-4729-4777-9460-2fdce31b2bc3/volumes" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.520618 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9399bfc6-7083-4978-b49d-bc46769c2b9e","Type":"ContainerDied","Data":"a4fd2130796fb3af0e7960b4f61e0b3a284a488489630a1d386b0d9487a9d9c8"} Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.520699 4931 scope.go:117] "RemoveContainer" containerID="813a4c2198bcec0f1cab1d8053ab7437050231fbc0dad71c25c447ce377f8fcf" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.520745 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.582880 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.612234 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.625295 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:51 crc kubenswrapper[4931]: E0130 06:38:51.625800 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerName="dnsmasq-dns" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.625824 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerName="dnsmasq-dns" Jan 30 06:38:51 crc kubenswrapper[4931]: E0130 06:38:51.625844 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerName="init" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.625853 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerName="init" Jan 30 06:38:51 crc kubenswrapper[4931]: E0130 06:38:51.625889 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9399bfc6-7083-4978-b49d-bc46769c2b9e" containerName="nova-scheduler-scheduler" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.625899 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="9399bfc6-7083-4978-b49d-bc46769c2b9e" containerName="nova-scheduler-scheduler" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.626117 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9399bfc6-7083-4978-b49d-bc46769c2b9e" containerName="nova-scheduler-scheduler" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.626149 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c3e877-4729-4777-9460-2fdce31b2bc3" containerName="dnsmasq-dns" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.626915 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.629599 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.636166 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.661400 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-config-data\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.661568 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.661658 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khfr\" (UniqueName: \"kubernetes.io/projected/ee55169a-5fa4-4ad5-b765-41685339650c-kube-api-access-6khfr\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.763647 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.763764 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khfr\" (UniqueName: \"kubernetes.io/projected/ee55169a-5fa4-4ad5-b765-41685339650c-kube-api-access-6khfr\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.763811 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-config-data\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.767539 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.769402 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-config-data\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.785504 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khfr\" (UniqueName: \"kubernetes.io/projected/ee55169a-5fa4-4ad5-b765-41685339650c-kube-api-access-6khfr\") pod \"nova-scheduler-0\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " pod="openstack/nova-scheduler-0" Jan 30 06:38:51 crc kubenswrapper[4931]: I0130 06:38:51.950939 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:38:52 crc kubenswrapper[4931]: I0130 06:38:52.464453 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:38:52 crc kubenswrapper[4931]: W0130 06:38:52.467627 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee55169a_5fa4_4ad5_b765_41685339650c.slice/crio-c43be8ffe96dc67c1c71579ad5ffebd7fcb89e53c15d4d3b57a62c61c04d179d WatchSource:0}: Error finding container c43be8ffe96dc67c1c71579ad5ffebd7fcb89e53c15d4d3b57a62c61c04d179d: Status 404 returned error can't find the container with id c43be8ffe96dc67c1c71579ad5ffebd7fcb89e53c15d4d3b57a62c61c04d179d Jan 30 06:38:52 crc kubenswrapper[4931]: I0130 06:38:52.538229 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee55169a-5fa4-4ad5-b765-41685339650c","Type":"ContainerStarted","Data":"c43be8ffe96dc67c1c71579ad5ffebd7fcb89e53c15d4d3b57a62c61c04d179d"} Jan 30 06:38:52 crc kubenswrapper[4931]: I0130 06:38:52.849968 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:38:52 crc kubenswrapper[4931]: I0130 06:38:52.850024 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:38:53 crc kubenswrapper[4931]: I0130 06:38:53.442841 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9399bfc6-7083-4978-b49d-bc46769c2b9e" path="/var/lib/kubelet/pods/9399bfc6-7083-4978-b49d-bc46769c2b9e/volumes" Jan 30 06:38:53 crc kubenswrapper[4931]: I0130 06:38:53.553252 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee55169a-5fa4-4ad5-b765-41685339650c","Type":"ContainerStarted","Data":"0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8"} Jan 30 06:38:53 crc kubenswrapper[4931]: I0130 06:38:53.581698 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.581672871 podStartE2EDuration="2.581672871s" podCreationTimestamp="2026-01-30 06:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:53.571117586 +0000 UTC m=+5468.941027873" watchObservedRunningTime="2026-01-30 06:38:53.581672871 +0000 UTC m=+5468.951583158" Jan 30 06:38:54 crc kubenswrapper[4931]: I0130 06:38:54.519872 4931 scope.go:117] "RemoveContainer" containerID="eb43994040c586b7da805891db0738dbfdbd3eca5875691d38856aeb8f3f02e2" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.350784 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 
06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.864212 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xqzzz"] Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.865993 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.870332 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.870796 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.876696 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xqzzz"] Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.951648 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.971120 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.971179 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zws7k\" (UniqueName: \"kubernetes.io/projected/d80061de-8d87-4c58-8733-26c5224bf03a-kube-api-access-zws7k\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.971235 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-config-data\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:56 crc kubenswrapper[4931]: I0130 06:38:56.971262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-scripts\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.073220 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-config-data\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.073608 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-scripts\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.073891 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.074111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zws7k\" (UniqueName: \"kubernetes.io/projected/d80061de-8d87-4c58-8733-26c5224bf03a-kube-api-access-zws7k\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.078268 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.086180 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-scripts\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.094809 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zws7k\" (UniqueName: \"kubernetes.io/projected/d80061de-8d87-4c58-8733-26c5224bf03a-kube-api-access-zws7k\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.098833 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-config-data\") pod \"nova-cell1-cell-mapping-xqzzz\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.213610 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.363038 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.363291 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.363350 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.363933 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7acc7dd93c5206c6ff8ebf3271a041083dbd9bcf0e00cc88a42d6c0b4c7429dd"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.363982 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://7acc7dd93c5206c6ff8ebf3271a041083dbd9bcf0e00cc88a42d6c0b4c7429dd" gracePeriod=600 Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.603930 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="7acc7dd93c5206c6ff8ebf3271a041083dbd9bcf0e00cc88a42d6c0b4c7429dd" exitCode=0 Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.603985 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"7acc7dd93c5206c6ff8ebf3271a041083dbd9bcf0e00cc88a42d6c0b4c7429dd"} Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.604031 4931 scope.go:117] "RemoveContainer" containerID="a15ec04892f67aace9be1080c17614ea03774513f4a10f801ecc9b778518f57b" Jan 30 06:38:57 crc kubenswrapper[4931]: W0130 06:38:57.717675 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd80061de_8d87_4c58_8733_26c5224bf03a.slice/crio-4bbab414c5eeb76108044d3c6b386d270ce280324b5ec5da58d6e7fbd35f561d WatchSource:0}: Error finding container 4bbab414c5eeb76108044d3c6b386d270ce280324b5ec5da58d6e7fbd35f561d: Status 404 returned error can't find the container with id 4bbab414c5eeb76108044d3c6b386d270ce280324b5ec5da58d6e7fbd35f561d Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.728189 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xqzzz"] Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.850275 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 
06:38:57.850323 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.884790 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 06:38:57 crc kubenswrapper[4931]: I0130 06:38:57.886458 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 06:38:58 crc kubenswrapper[4931]: I0130 06:38:58.615087 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xqzzz" event={"ID":"d80061de-8d87-4c58-8733-26c5224bf03a","Type":"ContainerStarted","Data":"ad91a28e445938a8582000a48ddcd232576020a5c15a4a29af6e45aaf8531507"} Jan 30 06:38:58 crc kubenswrapper[4931]: I0130 06:38:58.615545 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xqzzz" event={"ID":"d80061de-8d87-4c58-8733-26c5224bf03a","Type":"ContainerStarted","Data":"4bbab414c5eeb76108044d3c6b386d270ce280324b5ec5da58d6e7fbd35f561d"} Jan 30 06:38:58 crc kubenswrapper[4931]: I0130 06:38:58.617845 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7"} Jan 30 06:38:58 crc kubenswrapper[4931]: I0130 06:38:58.648550 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xqzzz" podStartSLOduration=2.648525491 podStartE2EDuration="2.648525491s" podCreationTimestamp="2026-01-30 06:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:38:58.640037354 +0000 UTC m=+5474.009947621" watchObservedRunningTime="2026-01-30 06:38:58.648525491 +0000 UTC m=+5474.018435758" Jan 30 06:38:59 crc kubenswrapper[4931]: I0130 06:38:59.014674 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.71:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:38:59 crc kubenswrapper[4931]: I0130 06:38:59.014730 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.71:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:38:59 crc kubenswrapper[4931]: I0130 06:38:59.014714 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:38:59 crc kubenswrapper[4931]: I0130 06:38:59.014674 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.70:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.140245 4931 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47vng"] Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.143267 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.162719 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47vng"] Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.248467 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q779l\" (UniqueName: \"kubernetes.io/projected/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-kube-api-access-q779l\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.248510 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-catalog-content\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.248558 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-utilities\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.350077 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q779l\" (UniqueName: \"kubernetes.io/projected/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-kube-api-access-q779l\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.350121 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-catalog-content\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.350169 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-utilities\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.350638 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-utilities\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.350732 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-catalog-content\") pod \"community-operators-47vng\" 
(UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.388840 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q779l\" (UniqueName: \"kubernetes.io/projected/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-kube-api-access-q779l\") pod \"community-operators-47vng\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.477077 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.951142 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 06:39:01 crc kubenswrapper[4931]: I0130 06:39:01.981336 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.028622 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47vng"] Jan 30 06:39:02 crc kubenswrapper[4931]: W0130 06:39:02.033603 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc3d76da_828d_4a79_8f4f_aa9003c7eb85.slice/crio-254d4413486427e0757d7276f1aefd28a7034ac0768f441833b69cb8156807be WatchSource:0}: Error finding container 254d4413486427e0757d7276f1aefd28a7034ac0768f441833b69cb8156807be: Status 404 returned error can't find the container with id 254d4413486427e0757d7276f1aefd28a7034ac0768f441833b69cb8156807be Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.657155 4931 generic.go:334] "Generic (PLEG): container finished" podID="d80061de-8d87-4c58-8733-26c5224bf03a" containerID="ad91a28e445938a8582000a48ddcd232576020a5c15a4a29af6e45aaf8531507" exitCode=0 Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.657234 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xqzzz" event={"ID":"d80061de-8d87-4c58-8733-26c5224bf03a","Type":"ContainerDied","Data":"ad91a28e445938a8582000a48ddcd232576020a5c15a4a29af6e45aaf8531507"} Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.661007 4931 generic.go:334] "Generic (PLEG): container finished" podID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerID="a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b" exitCode=0 Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.661068 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerDied","Data":"a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b"} Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.661113 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerStarted","Data":"254d4413486427e0757d7276f1aefd28a7034ac0768f441833b69cb8156807be"} Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.664623 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:39:02 crc kubenswrapper[4931]: I0130 06:39:02.709220 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Jan 30 06:39:03 crc kubenswrapper[4931]: I0130 06:39:03.675281 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerStarted","Data":"ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a"} Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.186923 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.321947 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zws7k\" (UniqueName: \"kubernetes.io/projected/d80061de-8d87-4c58-8733-26c5224bf03a-kube-api-access-zws7k\") pod \"d80061de-8d87-4c58-8733-26c5224bf03a\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.322171 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-combined-ca-bundle\") pod \"d80061de-8d87-4c58-8733-26c5224bf03a\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.322274 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-config-data\") pod \"d80061de-8d87-4c58-8733-26c5224bf03a\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.322299 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-scripts\") pod \"d80061de-8d87-4c58-8733-26c5224bf03a\" (UID: \"d80061de-8d87-4c58-8733-26c5224bf03a\") " Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.328531 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-scripts" (OuterVolumeSpecName: "scripts") pod "d80061de-8d87-4c58-8733-26c5224bf03a" (UID: "d80061de-8d87-4c58-8733-26c5224bf03a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.333533 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d80061de-8d87-4c58-8733-26c5224bf03a-kube-api-access-zws7k" (OuterVolumeSpecName: "kube-api-access-zws7k") pod "d80061de-8d87-4c58-8733-26c5224bf03a" (UID: "d80061de-8d87-4c58-8733-26c5224bf03a"). InnerVolumeSpecName "kube-api-access-zws7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.346155 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-config-data" (OuterVolumeSpecName: "config-data") pod "d80061de-8d87-4c58-8733-26c5224bf03a" (UID: "d80061de-8d87-4c58-8733-26c5224bf03a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.353017 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d80061de-8d87-4c58-8733-26c5224bf03a" (UID: "d80061de-8d87-4c58-8733-26c5224bf03a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.425239 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.425522 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.425682 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zws7k\" (UniqueName: \"kubernetes.io/projected/d80061de-8d87-4c58-8733-26c5224bf03a-kube-api-access-zws7k\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.425814 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80061de-8d87-4c58-8733-26c5224bf03a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.689753 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xqzzz" event={"ID":"d80061de-8d87-4c58-8733-26c5224bf03a","Type":"ContainerDied","Data":"4bbab414c5eeb76108044d3c6b386d270ce280324b5ec5da58d6e7fbd35f561d"} Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.689815 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bbab414c5eeb76108044d3c6b386d270ce280324b5ec5da58d6e7fbd35f561d" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.689909 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xqzzz" Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.699989 4931 generic.go:334] "Generic (PLEG): container finished" podID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerID="ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a" exitCode=0 Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.700091 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerDied","Data":"ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a"} Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.872079 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.872783 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-api" containerID="cri-o://99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed" gracePeriod=30 Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.873361 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-log" containerID="cri-o://587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8" gracePeriod=30 Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.943557 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.943798 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ee55169a-5fa4-4ad5-b765-41685339650c" containerName="nova-scheduler-scheduler" containerID="cri-o://0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" gracePeriod=30 Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.959867 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.960121 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-log" containerID="cri-o://93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb" gracePeriod=30 Jan 30 06:39:04 crc kubenswrapper[4931]: I0130 06:39:04.960271 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-metadata" containerID="cri-o://c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5" gracePeriod=30 Jan 30 06:39:05 crc kubenswrapper[4931]: E0130 06:39:05.029033 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7144d1e4_cf7f_4cd9_891c_02bf466f894f.slice/crio-587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8.scope\": RecentStats: unable to find data in memory cache]" Jan 30 06:39:05 crc kubenswrapper[4931]: I0130 06:39:05.713713 4931 generic.go:334] "Generic (PLEG): container finished" podID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerID="587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8" exitCode=143 Jan 30 06:39:05 crc 
kubenswrapper[4931]: I0130 06:39:05.713786 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7144d1e4-cf7f-4cd9-891c-02bf466f894f","Type":"ContainerDied","Data":"587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8"} Jan 30 06:39:05 crc kubenswrapper[4931]: I0130 06:39:05.716360 4931 generic.go:334] "Generic (PLEG): container finished" podID="dd921adf-3fc0-4727-bf34-17203123e432" containerID="93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb" exitCode=143 Jan 30 06:39:05 crc kubenswrapper[4931]: I0130 06:39:05.716450 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd921adf-3fc0-4727-bf34-17203123e432","Type":"ContainerDied","Data":"93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb"} Jan 30 06:39:05 crc kubenswrapper[4931]: I0130 06:39:05.718688 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerStarted","Data":"c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1"} Jan 30 06:39:05 crc kubenswrapper[4931]: I0130 06:39:05.748378 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47vng" podStartSLOduration=2.322250964 podStartE2EDuration="4.748356576s" podCreationTimestamp="2026-01-30 06:39:01 +0000 UTC" firstStartedPulling="2026-01-30 06:39:02.663406312 +0000 UTC m=+5478.033316569" lastFinishedPulling="2026-01-30 06:39:05.089511924 +0000 UTC m=+5480.459422181" observedRunningTime="2026-01-30 06:39:05.737005529 +0000 UTC m=+5481.106915796" watchObservedRunningTime="2026-01-30 06:39:05.748356576 +0000 UTC m=+5481.118266853" Jan 30 06:39:06 crc kubenswrapper[4931]: E0130 06:39:06.956854 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 06:39:06 crc kubenswrapper[4931]: E0130 06:39:06.958857 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 06:39:06 crc kubenswrapper[4931]: E0130 06:39:06.960780 4931 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 06:39:06 crc kubenswrapper[4931]: E0130 06:39:06.960846 4931 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ee55169a-5fa4-4ad5-b765-41685339650c" containerName="nova-scheduler-scheduler" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.554894 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.563266 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.606890 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7144d1e4-cf7f-4cd9-891c-02bf466f894f-logs\") pod \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.606934 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-config-data\") pod \"dd921adf-3fc0-4727-bf34-17203123e432\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.606974 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd921adf-3fc0-4727-bf34-17203123e432-logs\") pod \"dd921adf-3fc0-4727-bf34-17203123e432\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.607000 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpc9m\" (UniqueName: \"kubernetes.io/projected/7144d1e4-cf7f-4cd9-891c-02bf466f894f-kube-api-access-mpc9m\") pod \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.607050 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-combined-ca-bundle\") pod \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.607089 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-config-data\") pod \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\" (UID: \"7144d1e4-cf7f-4cd9-891c-02bf466f894f\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.607151 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-combined-ca-bundle\") pod \"dd921adf-3fc0-4727-bf34-17203123e432\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.607226 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9psj\" (UniqueName: \"kubernetes.io/projected/dd921adf-3fc0-4727-bf34-17203123e432-kube-api-access-z9psj\") pod \"dd921adf-3fc0-4727-bf34-17203123e432\" (UID: \"dd921adf-3fc0-4727-bf34-17203123e432\") " Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.607894 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7144d1e4-cf7f-4cd9-891c-02bf466f894f-logs" (OuterVolumeSpecName: "logs") pod "7144d1e4-cf7f-4cd9-891c-02bf466f894f" (UID: "7144d1e4-cf7f-4cd9-891c-02bf466f894f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.608001 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd921adf-3fc0-4727-bf34-17203123e432-logs" (OuterVolumeSpecName: "logs") pod "dd921adf-3fc0-4727-bf34-17203123e432" (UID: "dd921adf-3fc0-4727-bf34-17203123e432"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.608453 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7144d1e4-cf7f-4cd9-891c-02bf466f894f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.608472 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd921adf-3fc0-4727-bf34-17203123e432-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.612771 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd921adf-3fc0-4727-bf34-17203123e432-kube-api-access-z9psj" (OuterVolumeSpecName: "kube-api-access-z9psj") pod "dd921adf-3fc0-4727-bf34-17203123e432" (UID: "dd921adf-3fc0-4727-bf34-17203123e432"). InnerVolumeSpecName "kube-api-access-z9psj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.613089 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7144d1e4-cf7f-4cd9-891c-02bf466f894f-kube-api-access-mpc9m" (OuterVolumeSpecName: "kube-api-access-mpc9m") pod "7144d1e4-cf7f-4cd9-891c-02bf466f894f" (UID: "7144d1e4-cf7f-4cd9-891c-02bf466f894f"). InnerVolumeSpecName "kube-api-access-mpc9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.631155 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-config-data" (OuterVolumeSpecName: "config-data") pod "7144d1e4-cf7f-4cd9-891c-02bf466f894f" (UID: "7144d1e4-cf7f-4cd9-891c-02bf466f894f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.632022 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-config-data" (OuterVolumeSpecName: "config-data") pod "dd921adf-3fc0-4727-bf34-17203123e432" (UID: "dd921adf-3fc0-4727-bf34-17203123e432"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.634593 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7144d1e4-cf7f-4cd9-891c-02bf466f894f" (UID: "7144d1e4-cf7f-4cd9-891c-02bf466f894f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.641200 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd921adf-3fc0-4727-bf34-17203123e432" (UID: "dd921adf-3fc0-4727-bf34-17203123e432"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.710033 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9psj\" (UniqueName: \"kubernetes.io/projected/dd921adf-3fc0-4727-bf34-17203123e432-kube-api-access-z9psj\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.710305 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.710372 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpc9m\" (UniqueName: \"kubernetes.io/projected/7144d1e4-cf7f-4cd9-891c-02bf466f894f-kube-api-access-mpc9m\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.710450 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.710513 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7144d1e4-cf7f-4cd9-891c-02bf466f894f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.710598 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd921adf-3fc0-4727-bf34-17203123e432-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.756168 4931 generic.go:334] "Generic (PLEG): container finished" podID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerID="99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed" exitCode=0 Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.756213 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7144d1e4-cf7f-4cd9-891c-02bf466f894f","Type":"ContainerDied","Data":"99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed"} Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.756257 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.756566 4931 scope.go:117] "RemoveContainer" containerID="99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.756552 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7144d1e4-cf7f-4cd9-891c-02bf466f894f","Type":"ContainerDied","Data":"d8da632fe1a21ba627e5a93e9f2262e371b5481c6d1d50e41210e4e09d1d78c5"} Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.760270 4931 generic.go:334] "Generic (PLEG): container finished" podID="dd921adf-3fc0-4727-bf34-17203123e432" containerID="c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5" exitCode=0 Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.760315 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.760347 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd921adf-3fc0-4727-bf34-17203123e432","Type":"ContainerDied","Data":"c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5"} Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.760733 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dd921adf-3fc0-4727-bf34-17203123e432","Type":"ContainerDied","Data":"3427a754aa9dbc9c068a50beefaa44100c472db7b66af28a59aa83acea56ca82"} Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.815563 4931 scope.go:117] "RemoveContainer" containerID="587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.845975 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.849413 4931 scope.go:117] "RemoveContainer" containerID="99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.850006 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed\": container with ID starting with 99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed not found: ID does not exist" containerID="99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.850066 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed"} err="failed to get container status \"99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed\": rpc error: code = NotFound desc = could not find container \"99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed\": container with ID starting with 99aec164fb9690cd6c1b7a53b3b877acbec5359ed456860e231836eab572d4ed not found: ID does not exist" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.850100 4931 scope.go:117] "RemoveContainer" containerID="587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.850597 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8\": container with ID starting with 587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8 not found: ID does not exist" containerID="587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.850629 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8"} err="failed to get container status \"587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8\": rpc error: code = NotFound desc = could not find container \"587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8\": container with ID starting with 587caf75c31add55f1a6a574c49914f02af25c4b0be8f6b086fffe845f649aa8 not found: ID does not exist" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.850675 4931 scope.go:117] 
"RemoveContainer" containerID="c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.877825 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.894592 4931 scope.go:117] "RemoveContainer" containerID="93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.898519 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.899079 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-log" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899108 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-log" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.899125 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80061de-8d87-4c58-8733-26c5224bf03a" containerName="nova-manage" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899137 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80061de-8d87-4c58-8733-26c5224bf03a" containerName="nova-manage" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.899174 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-log" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899186 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-log" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.899204 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-api" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899216 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-api" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.899231 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-metadata" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899242 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-metadata" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899578 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80061de-8d87-4c58-8733-26c5224bf03a" containerName="nova-manage" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899621 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-log" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899646 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" containerName="nova-api-api" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899665 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd921adf-3fc0-4727-bf34-17203123e432" containerName="nova-metadata-metadata" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.899701 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" 
containerName="nova-api-log" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.901369 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.903880 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.909876 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.918108 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.930234 4931 scope.go:117] "RemoveContainer" containerID="c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.931819 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.932595 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5\": container with ID starting with c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5 not found: ID does not exist" containerID="c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.932634 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5"} err="failed to get container status \"c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5\": rpc error: code = NotFound desc = could not find container \"c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5\": container with ID starting with c0c11230e25196ea7528394085a766313743d5088ac7e8e0d9898abd0fdf47d5 not found: ID does not exist" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.932666 4931 scope.go:117] "RemoveContainer" containerID="93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb" Jan 30 06:39:08 crc kubenswrapper[4931]: E0130 06:39:08.932950 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb\": container with ID starting with 93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb not found: ID does not exist" containerID="93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.932966 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb"} err="failed to get container status \"93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb\": rpc error: code = NotFound desc = could not find container \"93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb\": container with ID starting with 93f20145e875dd7d7fbc11140ea0da414569321168d69c332aa4d305f94bbfcb not found: ID does not exist" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.940209 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.942403 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.948046 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 06:39:08 crc kubenswrapper[4931]: I0130 06:39:08.952140 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.020567 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-config-data\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.020644 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgzf\" (UniqueName: \"kubernetes.io/projected/4c56d95d-5087-41db-a759-2273aef32a3c-kube-api-access-wcgzf\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.020938 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.021096 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.021255 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2530f454-5ee2-4767-8c0b-75d50ba8a44b-logs\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.021454 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c56d95d-5087-41db-a759-2273aef32a3c-logs\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.021497 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chqpp\" (UniqueName: \"kubernetes.io/projected/2530f454-5ee2-4767-8c0b-75d50ba8a44b-kube-api-access-chqpp\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.021624 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-config-data\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.076489 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.123868 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c56d95d-5087-41db-a759-2273aef32a3c-logs\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124175 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chqpp\" (UniqueName: \"kubernetes.io/projected/2530f454-5ee2-4767-8c0b-75d50ba8a44b-kube-api-access-chqpp\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124221 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-config-data\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124277 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-config-data\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124311 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgzf\" (UniqueName: \"kubernetes.io/projected/4c56d95d-5087-41db-a759-2273aef32a3c-kube-api-access-wcgzf\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124335 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c56d95d-5087-41db-a759-2273aef32a3c-logs\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124363 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124397 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124510 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2530f454-5ee2-4767-8c0b-75d50ba8a44b-logs\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.124888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2530f454-5ee2-4767-8c0b-75d50ba8a44b-logs\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " 
pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.132340 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.133709 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-config-data\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.134747 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.139995 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-config-data\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.144320 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chqpp\" (UniqueName: \"kubernetes.io/projected/2530f454-5ee2-4767-8c0b-75d50ba8a44b-kube-api-access-chqpp\") pod \"nova-api-0\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.145374 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgzf\" (UniqueName: \"kubernetes.io/projected/4c56d95d-5087-41db-a759-2273aef32a3c-kube-api-access-wcgzf\") pod \"nova-metadata-0\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.216614 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.225587 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-config-data\") pod \"ee55169a-5fa4-4ad5-b765-41685339650c\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.225732 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6khfr\" (UniqueName: \"kubernetes.io/projected/ee55169a-5fa4-4ad5-b765-41685339650c-kube-api-access-6khfr\") pod \"ee55169a-5fa4-4ad5-b765-41685339650c\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.225788 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-combined-ca-bundle\") pod \"ee55169a-5fa4-4ad5-b765-41685339650c\" (UID: \"ee55169a-5fa4-4ad5-b765-41685339650c\") " Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.232283 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee55169a-5fa4-4ad5-b765-41685339650c-kube-api-access-6khfr" (OuterVolumeSpecName: "kube-api-access-6khfr") pod "ee55169a-5fa4-4ad5-b765-41685339650c" (UID: "ee55169a-5fa4-4ad5-b765-41685339650c"). InnerVolumeSpecName "kube-api-access-6khfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.255366 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee55169a-5fa4-4ad5-b765-41685339650c" (UID: "ee55169a-5fa4-4ad5-b765-41685339650c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.259468 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.272635 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-config-data" (OuterVolumeSpecName: "config-data") pod "ee55169a-5fa4-4ad5-b765-41685339650c" (UID: "ee55169a-5fa4-4ad5-b765-41685339650c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.328090 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.328128 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6khfr\" (UniqueName: \"kubernetes.io/projected/ee55169a-5fa4-4ad5-b765-41685339650c-kube-api-access-6khfr\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.328143 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee55169a-5fa4-4ad5-b765-41685339650c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.443452 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7144d1e4-cf7f-4cd9-891c-02bf466f894f" path="/var/lib/kubelet/pods/7144d1e4-cf7f-4cd9-891c-02bf466f894f/volumes" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.444231 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd921adf-3fc0-4727-bf34-17203123e432" path="/var/lib/kubelet/pods/dd921adf-3fc0-4727-bf34-17203123e432/volumes" Jan 30 06:39:09 crc kubenswrapper[4931]: W0130 06:39:09.724523 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c56d95d_5087_41db_a759_2273aef32a3c.slice/crio-927c72bfe637ff62dfcafae17c76ddbeded161c27b84f0c5f70d67241dbe6fdd WatchSource:0}: Error finding container 927c72bfe637ff62dfcafae17c76ddbeded161c27b84f0c5f70d67241dbe6fdd: Status 404 returned error can't find the container with id 927c72bfe637ff62dfcafae17c76ddbeded161c27b84f0c5f70d67241dbe6fdd Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.727850 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.790854 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c56d95d-5087-41db-a759-2273aef32a3c","Type":"ContainerStarted","Data":"927c72bfe637ff62dfcafae17c76ddbeded161c27b84f0c5f70d67241dbe6fdd"} Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.792946 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.799788 4931 generic.go:334] "Generic (PLEG): container finished" podID="ee55169a-5fa4-4ad5-b765-41685339650c" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" exitCode=0 Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.799816 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee55169a-5fa4-4ad5-b765-41685339650c","Type":"ContainerDied","Data":"0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8"} Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.799839 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee55169a-5fa4-4ad5-b765-41685339650c","Type":"ContainerDied","Data":"c43be8ffe96dc67c1c71579ad5ffebd7fcb89e53c15d4d3b57a62c61c04d179d"} Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.799855 4931 scope.go:117] "RemoveContainer" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" Jan 30 
06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.799895 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:39:09 crc kubenswrapper[4931]: W0130 06:39:09.814913 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2530f454_5ee2_4767_8c0b_75d50ba8a44b.slice/crio-aef8e55688d59f4d24924520cc3d195a0fe2137c7ba9584c52f87f8f22ad3859 WatchSource:0}: Error finding container aef8e55688d59f4d24924520cc3d195a0fe2137c7ba9584c52f87f8f22ad3859: Status 404 returned error can't find the container with id aef8e55688d59f4d24924520cc3d195a0fe2137c7ba9584c52f87f8f22ad3859 Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.831177 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.832635 4931 scope.go:117] "RemoveContainer" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" Jan 30 06:39:09 crc kubenswrapper[4931]: E0130 06:39:09.833123 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8\": container with ID starting with 0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8 not found: ID does not exist" containerID="0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.833173 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8"} err="failed to get container status \"0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8\": rpc error: code = NotFound desc = could not find container \"0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8\": container with ID starting with 0c76769cb41b7bc6ed6258448d4d6f919efef2f7b0e763d4a7ea1422d4adbfd8 not found: ID does not exist" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.853750 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.861692 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: E0130 06:39:09.862343 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee55169a-5fa4-4ad5-b765-41685339650c" containerName="nova-scheduler-scheduler" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.862363 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee55169a-5fa4-4ad5-b765-41685339650c" containerName="nova-scheduler-scheduler" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.862734 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee55169a-5fa4-4ad5-b765-41685339650c" containerName="nova-scheduler-scheduler" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.863799 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.868261 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.889575 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.950800 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxfx\" (UniqueName: \"kubernetes.io/projected/0cc67f7b-ac8c-4b63-8f28-fd5135307022-kube-api-access-gpxfx\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.950847 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-config-data\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:09 crc kubenswrapper[4931]: I0130 06:39:09.950869 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.051714 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.052093 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpxfx\" (UniqueName: \"kubernetes.io/projected/0cc67f7b-ac8c-4b63-8f28-fd5135307022-kube-api-access-gpxfx\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.052119 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-config-data\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.057036 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.057094 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-config-data\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.077664 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpxfx\" (UniqueName: 
\"kubernetes.io/projected/0cc67f7b-ac8c-4b63-8f28-fd5135307022-kube-api-access-gpxfx\") pod \"nova-scheduler-0\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.220741 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.707744 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.816956 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c56d95d-5087-41db-a759-2273aef32a3c","Type":"ContainerStarted","Data":"7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a"} Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.817312 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c56d95d-5087-41db-a759-2273aef32a3c","Type":"ContainerStarted","Data":"a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd"} Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.818724 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cc67f7b-ac8c-4b63-8f28-fd5135307022","Type":"ContainerStarted","Data":"caa956bb861b331dfc23294a937380376a24f5f4a7dcf1c49c1dbdd00bea437a"} Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.828686 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2530f454-5ee2-4767-8c0b-75d50ba8a44b","Type":"ContainerStarted","Data":"b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155"} Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.828746 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2530f454-5ee2-4767-8c0b-75d50ba8a44b","Type":"ContainerStarted","Data":"d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc"} Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.828768 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2530f454-5ee2-4767-8c0b-75d50ba8a44b","Type":"ContainerStarted","Data":"aef8e55688d59f4d24924520cc3d195a0fe2137c7ba9584c52f87f8f22ad3859"} Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.863577 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.863546998 podStartE2EDuration="2.863546998s" podCreationTimestamp="2026-01-30 06:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:10.844989889 +0000 UTC m=+5486.214900156" watchObservedRunningTime="2026-01-30 06:39:10.863546998 +0000 UTC m=+5486.233457295" Jan 30 06:39:10 crc kubenswrapper[4931]: I0130 06:39:10.874708 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.87468396 podStartE2EDuration="2.87468396s" podCreationTimestamp="2026-01-30 06:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:10.864167456 +0000 UTC m=+5486.234077713" watchObservedRunningTime="2026-01-30 06:39:10.87468396 +0000 UTC m=+5486.244594257" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.440767 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ee55169a-5fa4-4ad5-b765-41685339650c" path="/var/lib/kubelet/pods/ee55169a-5fa4-4ad5-b765-41685339650c/volumes" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.477662 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.477794 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.559091 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.842152 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cc67f7b-ac8c-4b63-8f28-fd5135307022","Type":"ContainerStarted","Data":"51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635"} Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.877496 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.877471444 podStartE2EDuration="2.877471444s" podCreationTimestamp="2026-01-30 06:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:11.874560193 +0000 UTC m=+5487.244470500" watchObservedRunningTime="2026-01-30 06:39:11.877471444 +0000 UTC m=+5487.247381741" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.928856 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:11 crc kubenswrapper[4931]: I0130 06:39:11.991281 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47vng"] Jan 30 06:39:13 crc kubenswrapper[4931]: I0130 06:39:13.864218 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47vng" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="registry-server" containerID="cri-o://c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1" gracePeriod=2 Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.220587 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.220994 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.367083 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.447105 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-catalog-content\") pod \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.447301 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-utilities\") pod \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.447390 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q779l\" (UniqueName: \"kubernetes.io/projected/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-kube-api-access-q779l\") pod \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\" (UID: \"cc3d76da-828d-4a79-8f4f-aa9003c7eb85\") " Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.448858 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-utilities" (OuterVolumeSpecName: "utilities") pod "cc3d76da-828d-4a79-8f4f-aa9003c7eb85" (UID: "cc3d76da-828d-4a79-8f4f-aa9003c7eb85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.453399 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-kube-api-access-q779l" (OuterVolumeSpecName: "kube-api-access-q779l") pod "cc3d76da-828d-4a79-8f4f-aa9003c7eb85" (UID: "cc3d76da-828d-4a79-8f4f-aa9003c7eb85"). InnerVolumeSpecName "kube-api-access-q779l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.502447 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc3d76da-828d-4a79-8f4f-aa9003c7eb85" (UID: "cc3d76da-828d-4a79-8f4f-aa9003c7eb85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.549846 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q779l\" (UniqueName: \"kubernetes.io/projected/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-kube-api-access-q779l\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.549875 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.549884 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc3d76da-828d-4a79-8f4f-aa9003c7eb85-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.878027 4931 generic.go:334] "Generic (PLEG): container finished" podID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerID="c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1" exitCode=0 Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.878102 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerDied","Data":"c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1"} Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.878115 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vng" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.878144 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vng" event={"ID":"cc3d76da-828d-4a79-8f4f-aa9003c7eb85","Type":"ContainerDied","Data":"254d4413486427e0757d7276f1aefd28a7034ac0768f441833b69cb8156807be"} Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.878175 4931 scope.go:117] "RemoveContainer" containerID="c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.908153 4931 scope.go:117] "RemoveContainer" containerID="ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.944442 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47vng"] Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.960470 4931 scope.go:117] "RemoveContainer" containerID="a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.963585 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47vng"] Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.985370 4931 scope.go:117] "RemoveContainer" containerID="c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1" Jan 30 06:39:14 crc kubenswrapper[4931]: E0130 06:39:14.986219 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1\": container with ID starting with c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1 not found: ID does not exist" containerID="c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.986257 
4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1"} err="failed to get container status \"c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1\": rpc error: code = NotFound desc = could not find container \"c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1\": container with ID starting with c48f0755a45d4afa892452045777a85195356f2296938c94d478ca9d785262a1 not found: ID does not exist" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.986278 4931 scope.go:117] "RemoveContainer" containerID="ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a" Jan 30 06:39:14 crc kubenswrapper[4931]: E0130 06:39:14.986702 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a\": container with ID starting with ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a not found: ID does not exist" containerID="ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.986732 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a"} err="failed to get container status \"ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a\": rpc error: code = NotFound desc = could not find container \"ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a\": container with ID starting with ef4ac3d8b0d4e86ef467dd6f37fc4673ea797f5d3dbabc0f98fc2c28865e956a not found: ID does not exist" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.986749 4931 scope.go:117] "RemoveContainer" containerID="a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b" Jan 30 06:39:14 crc kubenswrapper[4931]: E0130 06:39:14.986984 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b\": container with ID starting with a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b not found: ID does not exist" containerID="a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b" Jan 30 06:39:14 crc kubenswrapper[4931]: I0130 06:39:14.987018 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b"} err="failed to get container status \"a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b\": rpc error: code = NotFound desc = could not find container \"a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b\": container with ID starting with a2e2cc9e9d4103ac2ed8bada89c67b32e5943424cecc01bc674e582cc8f1414b not found: ID does not exist" Jan 30 06:39:15 crc kubenswrapper[4931]: I0130 06:39:15.220838 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 06:39:15 crc kubenswrapper[4931]: I0130 06:39:15.446236 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" path="/var/lib/kubelet/pods/cc3d76da-828d-4a79-8f4f-aa9003c7eb85/volumes" Jan 30 06:39:19 crc kubenswrapper[4931]: I0130 06:39:19.217345 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 06:39:19 crc kubenswrapper[4931]: I0130 06:39:19.218084 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 06:39:19 crc kubenswrapper[4931]: I0130 06:39:19.260651 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 06:39:19 crc kubenswrapper[4931]: I0130 06:39:19.261584 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.221856 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.259259 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.299650 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.299695 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.381856 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.382075 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:39:20 crc kubenswrapper[4931]: I0130 06:39:20.979591 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 06:39:29 crc kubenswrapper[4931]: I0130 06:39:29.222127 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 06:39:29 crc kubenswrapper[4931]: I0130 06:39:29.224176 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 06:39:29 crc kubenswrapper[4931]: I0130 06:39:29.226239 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 06:39:29 crc kubenswrapper[4931]: I0130 06:39:29.276864 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 06:39:29 crc kubenswrapper[4931]: I0130 06:39:29.277713 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 06:39:29 crc kubenswrapper[4931]: I0130 06:39:29.277976 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 06:39:29 crc 
kubenswrapper[4931]: I0130 06:39:29.283338 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.065455 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.067793 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.069695 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.317289 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6b7dcd95-j4hmz"] Jan 30 06:39:30 crc kubenswrapper[4931]: E0130 06:39:30.317881 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="registry-server" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.317894 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="registry-server" Jan 30 06:39:30 crc kubenswrapper[4931]: E0130 06:39:30.317908 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="extract-content" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.317914 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="extract-content" Jan 30 06:39:30 crc kubenswrapper[4931]: E0130 06:39:30.317936 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="extract-utilities" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.317942 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="extract-utilities" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.318126 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3d76da-828d-4a79-8f4f-aa9003c7eb85" containerName="registry-server" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.318980 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.344070 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6b7dcd95-j4hmz"] Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.390618 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.390816 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhrf\" (UniqueName: \"kubernetes.io/projected/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-kube-api-access-drhrf\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.390941 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-config\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.391646 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.391748 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-dns-svc\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.495599 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.495676 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-dns-svc\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.495889 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.496138 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-drhrf\" (UniqueName: \"kubernetes.io/projected/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-kube-api-access-drhrf\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.496889 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.497069 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.497076 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-dns-svc\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.497290 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-config\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.497842 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-config\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.515389 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhrf\" (UniqueName: \"kubernetes.io/projected/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-kube-api-access-drhrf\") pod \"dnsmasq-dns-5b6b7dcd95-j4hmz\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:30 crc kubenswrapper[4931]: I0130 06:39:30.636853 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:31 crc kubenswrapper[4931]: I0130 06:39:31.690186 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6b7dcd95-j4hmz"] Jan 30 06:39:31 crc kubenswrapper[4931]: W0130 06:39:31.707682 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf1a68d3_4ec7_46e5_9fee_ea7f89ec8c7b.slice/crio-4b8f45f3ed9effeb333573d3d797f78260b16f7ed70972bc0ea03075747c1260 WatchSource:0}: Error finding container 4b8f45f3ed9effeb333573d3d797f78260b16f7ed70972bc0ea03075747c1260: Status 404 returned error can't find the container with id 4b8f45f3ed9effeb333573d3d797f78260b16f7ed70972bc0ea03075747c1260 Jan 30 06:39:32 crc kubenswrapper[4931]: I0130 06:39:32.089528 4931 generic.go:334] "Generic (PLEG): container finished" podID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerID="f570ad04682e1e4020b7b6f4103c4537ec3f58b31b00dec90861ecb783916f09" exitCode=0 Jan 30 06:39:32 crc kubenswrapper[4931]: I0130 06:39:32.089582 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" event={"ID":"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b","Type":"ContainerDied","Data":"f570ad04682e1e4020b7b6f4103c4537ec3f58b31b00dec90861ecb783916f09"} Jan 30 06:39:32 crc kubenswrapper[4931]: I0130 06:39:32.090101 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" event={"ID":"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b","Type":"ContainerStarted","Data":"4b8f45f3ed9effeb333573d3d797f78260b16f7ed70972bc0ea03075747c1260"} Jan 30 06:39:33 crc kubenswrapper[4931]: I0130 06:39:33.106157 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" event={"ID":"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b","Type":"ContainerStarted","Data":"88c455ebf77b0c0d27d3a79f9baa7eb72c57c08194cb775791bc835634485c4f"} Jan 30 06:39:33 crc kubenswrapper[4931]: I0130 06:39:33.106577 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:33 crc kubenswrapper[4931]: I0130 06:39:33.127106 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" podStartSLOduration=3.127075451 podStartE2EDuration="3.127075451s" podCreationTimestamp="2026-01-30 06:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:33.123875852 +0000 UTC m=+5508.493786139" watchObservedRunningTime="2026-01-30 06:39:33.127075451 +0000 UTC m=+5508.496985738" Jan 30 06:39:40 crc kubenswrapper[4931]: I0130 06:39:40.638673 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:39:40 crc kubenswrapper[4931]: I0130 06:39:40.713795 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67c9d4fb9c-cpnds"] Jan 30 06:39:40 crc kubenswrapper[4931]: I0130 06:39:40.714002 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerName="dnsmasq-dns" containerID="cri-o://d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69" gracePeriod=10 Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.195357 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.201512 4931 generic.go:334] "Generic (PLEG): container finished" podID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerID="d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69" exitCode=0 Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.201547 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" event={"ID":"c8aaa63b-49f3-44c6-abe3-d24692e5894e","Type":"ContainerDied","Data":"d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69"} Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.201569 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" event={"ID":"c8aaa63b-49f3-44c6-abe3-d24692e5894e","Type":"ContainerDied","Data":"be21e08fe5735cb0ef573095e6460329d55a3b26fd373de9ad820520ace903ab"} Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.201585 4931 scope.go:117] "RemoveContainer" containerID="d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.201628 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67c9d4fb9c-cpnds" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.243082 4931 scope.go:117] "RemoveContainer" containerID="297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.283043 4931 scope.go:117] "RemoveContainer" containerID="d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69" Jan 30 06:39:41 crc kubenswrapper[4931]: E0130 06:39:41.283543 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69\": container with ID starting with d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69 not found: ID does not exist" containerID="d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.283574 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69"} err="failed to get container status \"d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69\": rpc error: code = NotFound desc = could not find container \"d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69\": container with ID starting with d1d0fb7302cb4bb01785b643544b6fa52ef364befea892a07300a916312dad69 not found: ID does not exist" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.283602 4931 scope.go:117] "RemoveContainer" containerID="297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181" Jan 30 06:39:41 crc kubenswrapper[4931]: E0130 06:39:41.283975 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181\": container with ID starting with 297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181 not found: ID does not exist" containerID="297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.284000 4931 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181"} err="failed to get container status \"297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181\": rpc error: code = NotFound desc = could not find container \"297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181\": container with ID starting with 297ce66cdef573b34d7a957122e5080cc6a632a68b54cc34d36fdf95b8d11181 not found: ID does not exist" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.326352 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-config\") pod \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.326449 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-dns-svc\") pod \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.327270 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-sb\") pod \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.327295 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-nb\") pod \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.327328 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm6xc\" (UniqueName: \"kubernetes.io/projected/c8aaa63b-49f3-44c6-abe3-d24692e5894e-kube-api-access-hm6xc\") pod \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\" (UID: \"c8aaa63b-49f3-44c6-abe3-d24692e5894e\") " Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.335715 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8aaa63b-49f3-44c6-abe3-d24692e5894e-kube-api-access-hm6xc" (OuterVolumeSpecName: "kube-api-access-hm6xc") pod "c8aaa63b-49f3-44c6-abe3-d24692e5894e" (UID: "c8aaa63b-49f3-44c6-abe3-d24692e5894e"). InnerVolumeSpecName "kube-api-access-hm6xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.381356 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8aaa63b-49f3-44c6-abe3-d24692e5894e" (UID: "c8aaa63b-49f3-44c6-abe3-d24692e5894e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.381404 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-config" (OuterVolumeSpecName: "config") pod "c8aaa63b-49f3-44c6-abe3-d24692e5894e" (UID: "c8aaa63b-49f3-44c6-abe3-d24692e5894e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.391549 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8aaa63b-49f3-44c6-abe3-d24692e5894e" (UID: "c8aaa63b-49f3-44c6-abe3-d24692e5894e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.407093 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8aaa63b-49f3-44c6-abe3-d24692e5894e" (UID: "c8aaa63b-49f3-44c6-abe3-d24692e5894e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.428749 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.428790 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.428799 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.428810 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm6xc\" (UniqueName: \"kubernetes.io/projected/c8aaa63b-49f3-44c6-abe3-d24692e5894e-kube-api-access-hm6xc\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.428820 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8aaa63b-49f3-44c6-abe3-d24692e5894e-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.537237 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67c9d4fb9c-cpnds"] Jan 30 06:39:41 crc kubenswrapper[4931]: I0130 06:39:41.549016 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67c9d4fb9c-cpnds"] Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.432851 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" path="/var/lib/kubelet/pods/c8aaa63b-49f3-44c6-abe3-d24692e5894e/volumes" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.441107 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7fmgw"] Jan 30 06:39:43 crc kubenswrapper[4931]: E0130 06:39:43.441513 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerName="init" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.441535 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerName="init" Jan 30 06:39:43 crc kubenswrapper[4931]: E0130 06:39:43.441570 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" 
containerName="dnsmasq-dns" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.441577 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerName="dnsmasq-dns" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.441728 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8aaa63b-49f3-44c6-abe3-d24692e5894e" containerName="dnsmasq-dns" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.442292 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.458629 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7fmgw"] Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.478286 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7909a37-a194-4666-b642-8193c2b8e29c-operator-scripts\") pod \"cinder-db-create-7fmgw\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.478366 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gxb\" (UniqueName: \"kubernetes.io/projected/a7909a37-a194-4666-b642-8193c2b8e29c-kube-api-access-f7gxb\") pod \"cinder-db-create-7fmgw\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.550902 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f264-account-create-update-hm2jw"] Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.552245 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.558548 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f264-account-create-update-hm2jw"] Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.558764 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.580664 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7909a37-a194-4666-b642-8193c2b8e29c-operator-scripts\") pod \"cinder-db-create-7fmgw\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.580733 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gxb\" (UniqueName: \"kubernetes.io/projected/a7909a37-a194-4666-b642-8193c2b8e29c-kube-api-access-f7gxb\") pod \"cinder-db-create-7fmgw\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.581446 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7909a37-a194-4666-b642-8193c2b8e29c-operator-scripts\") pod \"cinder-db-create-7fmgw\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.611503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gxb\" (UniqueName: \"kubernetes.io/projected/a7909a37-a194-4666-b642-8193c2b8e29c-kube-api-access-f7gxb\") pod \"cinder-db-create-7fmgw\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.682194 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bda186-cc7a-4422-8266-5f494795cf7f-operator-scripts\") pod \"cinder-f264-account-create-update-hm2jw\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.682275 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rpb\" (UniqueName: \"kubernetes.io/projected/44bda186-cc7a-4422-8266-5f494795cf7f-kube-api-access-68rpb\") pod \"cinder-f264-account-create-update-hm2jw\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.759275 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.784493 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rpb\" (UniqueName: \"kubernetes.io/projected/44bda186-cc7a-4422-8266-5f494795cf7f-kube-api-access-68rpb\") pod \"cinder-f264-account-create-update-hm2jw\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.784747 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bda186-cc7a-4422-8266-5f494795cf7f-operator-scripts\") pod \"cinder-f264-account-create-update-hm2jw\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.788048 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bda186-cc7a-4422-8266-5f494795cf7f-operator-scripts\") pod \"cinder-f264-account-create-update-hm2jw\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.806794 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rpb\" (UniqueName: \"kubernetes.io/projected/44bda186-cc7a-4422-8266-5f494795cf7f-kube-api-access-68rpb\") pod \"cinder-f264-account-create-update-hm2jw\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:43 crc kubenswrapper[4931]: I0130 06:39:43.871011 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:44 crc kubenswrapper[4931]: I0130 06:39:44.235875 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f264-account-create-update-hm2jw"] Jan 30 06:39:44 crc kubenswrapper[4931]: I0130 06:39:44.246710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f264-account-create-update-hm2jw" event={"ID":"44bda186-cc7a-4422-8266-5f494795cf7f","Type":"ContainerStarted","Data":"dd5867b3c6aa732ef49ab6ef7d805c2250f0d888186dd17b002ffcc4871b0ba1"} Jan 30 06:39:44 crc kubenswrapper[4931]: I0130 06:39:44.286509 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7fmgw"] Jan 30 06:39:45 crc kubenswrapper[4931]: I0130 06:39:45.262487 4931 generic.go:334] "Generic (PLEG): container finished" podID="44bda186-cc7a-4422-8266-5f494795cf7f" containerID="956ce554bd663761599c9dc4f978e7719f40043720c3d10db30cc18c76ff6127" exitCode=0 Jan 30 06:39:45 crc kubenswrapper[4931]: I0130 06:39:45.262610 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f264-account-create-update-hm2jw" event={"ID":"44bda186-cc7a-4422-8266-5f494795cf7f","Type":"ContainerDied","Data":"956ce554bd663761599c9dc4f978e7719f40043720c3d10db30cc18c76ff6127"} Jan 30 06:39:45 crc kubenswrapper[4931]: I0130 06:39:45.267631 4931 generic.go:334] "Generic (PLEG): container finished" podID="a7909a37-a194-4666-b642-8193c2b8e29c" containerID="cff7e0d64b5667667e85a8a7d8d6d557567a72e224933981bb30fb75cc9c37a5" exitCode=0 Jan 30 06:39:45 crc kubenswrapper[4931]: I0130 06:39:45.267672 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7fmgw" event={"ID":"a7909a37-a194-4666-b642-8193c2b8e29c","Type":"ContainerDied","Data":"cff7e0d64b5667667e85a8a7d8d6d557567a72e224933981bb30fb75cc9c37a5"} Jan 30 06:39:45 crc kubenswrapper[4931]: I0130 06:39:45.267697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7fmgw" event={"ID":"a7909a37-a194-4666-b642-8193c2b8e29c","Type":"ContainerStarted","Data":"6b2dd84b807447d9c8b727f554e36988b6bedbe41f7dd0abb738cb7ae80014fc"} Jan 30 06:39:46 crc kubenswrapper[4931]: I0130 06:39:46.934000 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:46 crc kubenswrapper[4931]: I0130 06:39:46.941558 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.048536 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7909a37-a194-4666-b642-8193c2b8e29c-operator-scripts\") pod \"a7909a37-a194-4666-b642-8193c2b8e29c\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.048711 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7gxb\" (UniqueName: \"kubernetes.io/projected/a7909a37-a194-4666-b642-8193c2b8e29c-kube-api-access-f7gxb\") pod \"a7909a37-a194-4666-b642-8193c2b8e29c\" (UID: \"a7909a37-a194-4666-b642-8193c2b8e29c\") " Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.048911 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rpb\" (UniqueName: \"kubernetes.io/projected/44bda186-cc7a-4422-8266-5f494795cf7f-kube-api-access-68rpb\") pod \"44bda186-cc7a-4422-8266-5f494795cf7f\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.048951 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bda186-cc7a-4422-8266-5f494795cf7f-operator-scripts\") pod \"44bda186-cc7a-4422-8266-5f494795cf7f\" (UID: \"44bda186-cc7a-4422-8266-5f494795cf7f\") " Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.049144 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7909a37-a194-4666-b642-8193c2b8e29c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7909a37-a194-4666-b642-8193c2b8e29c" (UID: "a7909a37-a194-4666-b642-8193c2b8e29c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.049613 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7909a37-a194-4666-b642-8193c2b8e29c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.050026 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44bda186-cc7a-4422-8266-5f494795cf7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44bda186-cc7a-4422-8266-5f494795cf7f" (UID: "44bda186-cc7a-4422-8266-5f494795cf7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.053914 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7909a37-a194-4666-b642-8193c2b8e29c-kube-api-access-f7gxb" (OuterVolumeSpecName: "kube-api-access-f7gxb") pod "a7909a37-a194-4666-b642-8193c2b8e29c" (UID: "a7909a37-a194-4666-b642-8193c2b8e29c"). InnerVolumeSpecName "kube-api-access-f7gxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.055851 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bda186-cc7a-4422-8266-5f494795cf7f-kube-api-access-68rpb" (OuterVolumeSpecName: "kube-api-access-68rpb") pod "44bda186-cc7a-4422-8266-5f494795cf7f" (UID: "44bda186-cc7a-4422-8266-5f494795cf7f"). InnerVolumeSpecName "kube-api-access-68rpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.151345 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7gxb\" (UniqueName: \"kubernetes.io/projected/a7909a37-a194-4666-b642-8193c2b8e29c-kube-api-access-f7gxb\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.151398 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rpb\" (UniqueName: \"kubernetes.io/projected/44bda186-cc7a-4422-8266-5f494795cf7f-kube-api-access-68rpb\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.151443 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44bda186-cc7a-4422-8266-5f494795cf7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.291609 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f264-account-create-update-hm2jw" event={"ID":"44bda186-cc7a-4422-8266-5f494795cf7f","Type":"ContainerDied","Data":"dd5867b3c6aa732ef49ab6ef7d805c2250f0d888186dd17b002ffcc4871b0ba1"} Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.291646 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd5867b3c6aa732ef49ab6ef7d805c2250f0d888186dd17b002ffcc4871b0ba1" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.291659 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f264-account-create-update-hm2jw" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.293784 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7fmgw" event={"ID":"a7909a37-a194-4666-b642-8193c2b8e29c","Type":"ContainerDied","Data":"6b2dd84b807447d9c8b727f554e36988b6bedbe41f7dd0abb738cb7ae80014fc"} Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.293806 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b2dd84b807447d9c8b727f554e36988b6bedbe41f7dd0abb738cb7ae80014fc" Jan 30 06:39:47 crc kubenswrapper[4931]: I0130 06:39:47.293873 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7fmgw" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.725333 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tfpsx"] Jan 30 06:39:48 crc kubenswrapper[4931]: E0130 06:39:48.725903 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7909a37-a194-4666-b642-8193c2b8e29c" containerName="mariadb-database-create" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.725926 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7909a37-a194-4666-b642-8193c2b8e29c" containerName="mariadb-database-create" Jan 30 06:39:48 crc kubenswrapper[4931]: E0130 06:39:48.725969 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44bda186-cc7a-4422-8266-5f494795cf7f" containerName="mariadb-account-create-update" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.725981 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="44bda186-cc7a-4422-8266-5f494795cf7f" containerName="mariadb-account-create-update" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.726276 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7909a37-a194-4666-b642-8193c2b8e29c" containerName="mariadb-database-create" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.726312 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="44bda186-cc7a-4422-8266-5f494795cf7f" containerName="mariadb-account-create-update" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.727314 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.732406 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.732963 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-499hm" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.733193 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.743308 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tfpsx"] Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.898256 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzqjk\" (UniqueName: \"kubernetes.io/projected/de0677a1-9051-4719-9e4e-142694e6683a-kube-api-access-jzqjk\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.899000 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-db-sync-config-data\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.899378 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-config-data\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 
06:39:48.899560 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-combined-ca-bundle\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.899769 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-scripts\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:48 crc kubenswrapper[4931]: I0130 06:39:48.899832 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0677a1-9051-4719-9e4e-142694e6683a-etc-machine-id\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.001849 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-config-data\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.001997 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-combined-ca-bundle\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.002155 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-scripts\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.002212 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0677a1-9051-4719-9e4e-142694e6683a-etc-machine-id\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.002354 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzqjk\" (UniqueName: \"kubernetes.io/projected/de0677a1-9051-4719-9e4e-142694e6683a-kube-api-access-jzqjk\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.002466 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-db-sync-config-data\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.002487 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/de0677a1-9051-4719-9e4e-142694e6683a-etc-machine-id\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.011025 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-config-data\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.014284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-combined-ca-bundle\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.015131 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-scripts\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.015257 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-db-sync-config-data\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.033062 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzqjk\" (UniqueName: \"kubernetes.io/projected/de0677a1-9051-4719-9e4e-142694e6683a-kube-api-access-jzqjk\") pod \"cinder-db-sync-tfpsx\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.056334 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:49 crc kubenswrapper[4931]: I0130 06:39:49.571632 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tfpsx"] Jan 30 06:39:49 crc kubenswrapper[4931]: W0130 06:39:49.578161 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde0677a1_9051_4719_9e4e_142694e6683a.slice/crio-02130b7da365af6fbb4827061845a7d662a4972437bcbee99c02f1e731fe6ada WatchSource:0}: Error finding container 02130b7da365af6fbb4827061845a7d662a4972437bcbee99c02f1e731fe6ada: Status 404 returned error can't find the container with id 02130b7da365af6fbb4827061845a7d662a4972437bcbee99c02f1e731fe6ada Jan 30 06:39:50 crc kubenswrapper[4931]: I0130 06:39:50.320254 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tfpsx" event={"ID":"de0677a1-9051-4719-9e4e-142694e6683a","Type":"ContainerStarted","Data":"02130b7da365af6fbb4827061845a7d662a4972437bcbee99c02f1e731fe6ada"} Jan 30 06:39:51 crc kubenswrapper[4931]: I0130 06:39:51.337393 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tfpsx" event={"ID":"de0677a1-9051-4719-9e4e-142694e6683a","Type":"ContainerStarted","Data":"18638ce6c93ee0d191ba3ee6b587a88fb1ae5413bad9f52cba0bc5cd608d3a29"} Jan 30 06:39:51 crc kubenswrapper[4931]: I0130 06:39:51.374986 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tfpsx" podStartSLOduration=3.374958873 podStartE2EDuration="3.374958873s" podCreationTimestamp="2026-01-30 06:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:51.361016153 +0000 UTC m=+5526.730926450" watchObservedRunningTime="2026-01-30 06:39:51.374958873 +0000 UTC m=+5526.744869170" Jan 30 06:39:53 crc kubenswrapper[4931]: I0130 06:39:53.369485 4931 generic.go:334] "Generic (PLEG): container finished" podID="de0677a1-9051-4719-9e4e-142694e6683a" containerID="18638ce6c93ee0d191ba3ee6b587a88fb1ae5413bad9f52cba0bc5cd608d3a29" exitCode=0 Jan 30 06:39:53 crc kubenswrapper[4931]: I0130 06:39:53.369546 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tfpsx" event={"ID":"de0677a1-9051-4719-9e4e-142694e6683a","Type":"ContainerDied","Data":"18638ce6c93ee0d191ba3ee6b587a88fb1ae5413bad9f52cba0bc5cd608d3a29"} Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.813195 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tfpsx" Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.966167 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-db-sync-config-data\") pod \"de0677a1-9051-4719-9e4e-142694e6683a\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.966275 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-config-data\") pod \"de0677a1-9051-4719-9e4e-142694e6683a\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.966328 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-combined-ca-bundle\") pod \"de0677a1-9051-4719-9e4e-142694e6683a\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.966371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzqjk\" (UniqueName: \"kubernetes.io/projected/de0677a1-9051-4719-9e4e-142694e6683a-kube-api-access-jzqjk\") pod \"de0677a1-9051-4719-9e4e-142694e6683a\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.966414 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-scripts\") pod \"de0677a1-9051-4719-9e4e-142694e6683a\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.966477 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0677a1-9051-4719-9e4e-142694e6683a-etc-machine-id\") pod \"de0677a1-9051-4719-9e4e-142694e6683a\" (UID: \"de0677a1-9051-4719-9e4e-142694e6683a\") " Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.967062 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de0677a1-9051-4719-9e4e-142694e6683a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "de0677a1-9051-4719-9e4e-142694e6683a" (UID: "de0677a1-9051-4719-9e4e-142694e6683a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.973519 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-scripts" (OuterVolumeSpecName: "scripts") pod "de0677a1-9051-4719-9e4e-142694e6683a" (UID: "de0677a1-9051-4719-9e4e-142694e6683a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.973661 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "de0677a1-9051-4719-9e4e-142694e6683a" (UID: "de0677a1-9051-4719-9e4e-142694e6683a"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:54 crc kubenswrapper[4931]: I0130 06:39:54.973881 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0677a1-9051-4719-9e4e-142694e6683a-kube-api-access-jzqjk" (OuterVolumeSpecName: "kube-api-access-jzqjk") pod "de0677a1-9051-4719-9e4e-142694e6683a" (UID: "de0677a1-9051-4719-9e4e-142694e6683a"). InnerVolumeSpecName "kube-api-access-jzqjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.015694 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-config-data" (OuterVolumeSpecName: "config-data") pod "de0677a1-9051-4719-9e4e-142694e6683a" (UID: "de0677a1-9051-4719-9e4e-142694e6683a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.022908 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de0677a1-9051-4719-9e4e-142694e6683a" (UID: "de0677a1-9051-4719-9e4e-142694e6683a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.069519 4931 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.069562 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.069576 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.069591 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzqjk\" (UniqueName: \"kubernetes.io/projected/de0677a1-9051-4719-9e4e-142694e6683a-kube-api-access-jzqjk\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.069604 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0677a1-9051-4719-9e4e-142694e6683a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.069617 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0677a1-9051-4719-9e4e-142694e6683a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.390843 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tfpsx" event={"ID":"de0677a1-9051-4719-9e4e-142694e6683a","Type":"ContainerDied","Data":"02130b7da365af6fbb4827061845a7d662a4972437bcbee99c02f1e731fe6ada"} Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.390878 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02130b7da365af6fbb4827061845a7d662a4972437bcbee99c02f1e731fe6ada" Jan 30 06:39:55 crc 
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.768006 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5879d4f7c5-x7dw2"]
Jan 30 06:39:55 crc kubenswrapper[4931]: E0130 06:39:55.769355 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0677a1-9051-4719-9e4e-142694e6683a" containerName="cinder-db-sync"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.769391 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0677a1-9051-4719-9e4e-142694e6683a" containerName="cinder-db-sync"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.774293 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0677a1-9051-4719-9e4e-142694e6683a" containerName="cinder-db-sync"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.785570 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.787456 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-nb\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.787596 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn9tk\" (UniqueName: \"kubernetes.io/projected/214b78b9-e769-4474-be87-e9b494c2fa69-kube-api-access-pn9tk\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.787689 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-dns-svc\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.787715 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-config\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.787847 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-sb\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.795374 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5879d4f7c5-x7dw2"]
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.890351 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn9tk\" (UniqueName: \"kubernetes.io/projected/214b78b9-e769-4474-be87-e9b494c2fa69-kube-api-access-pn9tk\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.890781 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-dns-svc\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.890805 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-config\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.890866 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-sb\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.890927 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-nb\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.891932 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-config\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.892626 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-sb\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.893784 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-nb\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.893939 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-dns-svc\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
Jan 30 06:39:55 crc kubenswrapper[4931]: I0130 06:39:55.923258 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn9tk\" (UniqueName: \"kubernetes.io/projected/214b78b9-e769-4474-be87-e9b494c2fa69-kube-api-access-pn9tk\") pod \"dnsmasq-dns-5879d4f7c5-x7dw2\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2"
\"214b78b9-e769-4474-be87-e9b494c2fa69\") " pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.088902 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.092299 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093223 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data-custom\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093280 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093325 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d301dc-5f68-4e1b-ae27-51aa02e45789-logs\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093410 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv5dr\" (UniqueName: \"kubernetes.io/projected/91d301dc-5f68-4e1b-ae27-51aa02e45789-kube-api-access-lv5dr\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093533 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093573 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91d301dc-5f68-4e1b-ae27-51aa02e45789-etc-machine-id\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.093625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-scripts\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.096648 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.096802 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-499hm" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.096907 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 06:39:56 crc kubenswrapper[4931]: 
I0130 06:39:56.097122 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.112923 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.119068 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.194925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-scripts\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195027 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data-custom\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195058 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195092 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d301dc-5f68-4e1b-ae27-51aa02e45789-logs\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195119 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv5dr\" (UniqueName: \"kubernetes.io/projected/91d301dc-5f68-4e1b-ae27-51aa02e45789-kube-api-access-lv5dr\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195136 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195152 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91d301dc-5f68-4e1b-ae27-51aa02e45789-etc-machine-id\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195261 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91d301dc-5f68-4e1b-ae27-51aa02e45789-etc-machine-id\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.195967 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d301dc-5f68-4e1b-ae27-51aa02e45789-logs\") pod \"cinder-api-0\" (UID: 
\"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.200848 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-scripts\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.201204 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.202271 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data-custom\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.205580 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.210304 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv5dr\" (UniqueName: \"kubernetes.io/projected/91d301dc-5f68-4e1b-ae27-51aa02e45789-kube-api-access-lv5dr\") pod \"cinder-api-0\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") " pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.413680 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.638487 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5879d4f7c5-x7dw2"] Jan 30 06:39:56 crc kubenswrapper[4931]: I0130 06:39:56.703199 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 06:39:57 crc kubenswrapper[4931]: I0130 06:39:57.429661 4931 generic.go:334] "Generic (PLEG): container finished" podID="214b78b9-e769-4474-be87-e9b494c2fa69" containerID="45cf3829eaba7efc9ffdbde5fa46c91facdbe555edf8963708f266596e0113d9" exitCode=0 Jan 30 06:39:57 crc kubenswrapper[4931]: I0130 06:39:57.437550 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" event={"ID":"214b78b9-e769-4474-be87-e9b494c2fa69","Type":"ContainerDied","Data":"45cf3829eaba7efc9ffdbde5fa46c91facdbe555edf8963708f266596e0113d9"} Jan 30 06:39:57 crc kubenswrapper[4931]: I0130 06:39:57.437606 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" event={"ID":"214b78b9-e769-4474-be87-e9b494c2fa69","Type":"ContainerStarted","Data":"1cea5345d991653f1a6830de732c35c4b2f81ed6821f46e956ec8f3a43e28720"} Jan 30 06:39:57 crc kubenswrapper[4931]: I0130 06:39:57.437627 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"91d301dc-5f68-4e1b-ae27-51aa02e45789","Type":"ContainerStarted","Data":"63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a"} Jan 30 06:39:57 crc kubenswrapper[4931]: I0130 06:39:57.437648 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"91d301dc-5f68-4e1b-ae27-51aa02e45789","Type":"ContainerStarted","Data":"845bdc0d8784551cc86053ca285aa0afc7bce0f017005659d5e194550515ea02"} Jan 30 06:39:58 crc kubenswrapper[4931]: I0130 06:39:58.477752 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" event={"ID":"214b78b9-e769-4474-be87-e9b494c2fa69","Type":"ContainerStarted","Data":"1b21e024974dbffc1b686bd5f52316fe76f211ede1a6fa05295886b31dbd35b5"} Jan 30 06:39:58 crc kubenswrapper[4931]: I0130 06:39:58.480611 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" Jan 30 06:39:58 crc kubenswrapper[4931]: I0130 06:39:58.497481 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"91d301dc-5f68-4e1b-ae27-51aa02e45789","Type":"ContainerStarted","Data":"fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71"} Jan 30 06:39:58 crc kubenswrapper[4931]: I0130 06:39:58.498549 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 06:39:58 crc kubenswrapper[4931]: I0130 06:39:58.531245 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" podStartSLOduration=3.531226427 podStartE2EDuration="3.531226427s" podCreationTimestamp="2026-01-30 06:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:58.516795173 +0000 UTC m=+5533.886705440" watchObservedRunningTime="2026-01-30 06:39:58.531226427 +0000 UTC m=+5533.901136684" Jan 30 06:39:58 crc kubenswrapper[4931]: I0130 06:39:58.567923 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=2.567898903 podStartE2EDuration="2.567898903s" podCreationTimestamp="2026-01-30 06:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:39:58.551799433 +0000 UTC m=+5533.921709690" watchObservedRunningTime="2026-01-30 06:39:58.567898903 +0000 UTC m=+5533.937809160" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.121625 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.203178 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6b7dcd95-j4hmz"] Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.203469 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerName="dnsmasq-dns" containerID="cri-o://88c455ebf77b0c0d27d3a79f9baa7eb72c57c08194cb775791bc835634485c4f" gracePeriod=10 Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.593376 4931 generic.go:334] "Generic (PLEG): container finished" podID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerID="88c455ebf77b0c0d27d3a79f9baa7eb72c57c08194cb775791bc835634485c4f" exitCode=0 Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.593647 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" event={"ID":"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b","Type":"ContainerDied","Data":"88c455ebf77b0c0d27d3a79f9baa7eb72c57c08194cb775791bc835634485c4f"} Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.752345 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.845346 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-dns-svc\") pod \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.846371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-config\") pod \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.846476 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-nb\") pod \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.846593 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drhrf\" (UniqueName: \"kubernetes.io/projected/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-kube-api-access-drhrf\") pod \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.846664 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-sb\") pod \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\" (UID: \"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b\") " Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.865660 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-kube-api-access-drhrf" (OuterVolumeSpecName: "kube-api-access-drhrf") pod "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" (UID: "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b"). InnerVolumeSpecName "kube-api-access-drhrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.889172 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" (UID: "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.911089 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" (UID: "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.913039 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" (UID: "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.918040 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-config" (OuterVolumeSpecName: "config") pod "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" (UID: "bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.948989 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drhrf\" (UniqueName: \"kubernetes.io/projected/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-kube-api-access-drhrf\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.949026 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.949043 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.949057 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:06 crc kubenswrapper[4931]: I0130 06:40:06.949068 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.602455 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" event={"ID":"bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b","Type":"ContainerDied","Data":"4b8f45f3ed9effeb333573d3d797f78260b16f7ed70972bc0ea03075747c1260"} Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.602754 4931 scope.go:117] "RemoveContainer" containerID="88c455ebf77b0c0d27d3a79f9baa7eb72c57c08194cb775791bc835634485c4f" Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.602562 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6b7dcd95-j4hmz" Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.631703 4931 scope.go:117] "RemoveContainer" containerID="f570ad04682e1e4020b7b6f4103c4537ec3f58b31b00dec90861ecb783916f09" Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.632912 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6b7dcd95-j4hmz"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.654087 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6b7dcd95-j4hmz"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.727690 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.727932 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-log" containerID="cri-o://d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.728356 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-api" containerID="cri-o://b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.736494 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.736695 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-log" containerID="cri-o://a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.737057 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-metadata" containerID="cri-o://7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.747474 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.747710 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2e99598c-cc27-462b-8c5b-9647fdc031dc" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a67c2af59ce774fac5d99d16e2c4d0308297f692031a8a81e468f5bef97702ac" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.755583 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.755969 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0cc67f7b-ac8c-4b63-8f28-fd5135307022" containerName="nova-scheduler-scheduler" containerID="cri-o://51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.763659 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.763868 4931 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dc16d452-7a63-4d86-b729-2f7384b3ea73" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53" gracePeriod=30 Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.801637 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:40:07 crc kubenswrapper[4931]: I0130 06:40:07.801827 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="c9b04495-2e29-4188-adbe-e6ed3669c25a" containerName="nova-cell1-conductor-conductor" containerID="cri-o://7956c67be4873a213d6ce531a234902ab8e420fef8d81bd2f9cc50a55b2ed19e" gracePeriod=30 Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.608749 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.612246 4931 generic.go:334] "Generic (PLEG): container finished" podID="4c56d95d-5087-41db-a759-2273aef32a3c" containerID="a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd" exitCode=143 Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.612285 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c56d95d-5087-41db-a759-2273aef32a3c","Type":"ContainerDied","Data":"a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd"} Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.615995 4931 generic.go:334] "Generic (PLEG): container finished" podID="dc16d452-7a63-4d86-b729-2f7384b3ea73" containerID="781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53" exitCode=0 Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.616055 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc16d452-7a63-4d86-b729-2f7384b3ea73","Type":"ContainerDied","Data":"781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53"} Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.616100 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc16d452-7a63-4d86-b729-2f7384b3ea73","Type":"ContainerDied","Data":"be2cb5795144a0336c26c2ce840d01e8f6b40f1134f0aca0ca6716edd8f9b6e4"} Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.616117 4931 scope.go:117] "RemoveContainer" containerID="781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.616230 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.619955 4931 generic.go:334] "Generic (PLEG): container finished" podID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerID="d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc" exitCode=143 Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.620020 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2530f454-5ee2-4767-8c0b-75d50ba8a44b","Type":"ContainerDied","Data":"d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc"} Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.622497 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.656903 4931 scope.go:117] "RemoveContainer" containerID="781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53" Jan 30 06:40:08 crc kubenswrapper[4931]: E0130 06:40:08.657243 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53\": container with ID starting with 781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53 not found: ID does not exist" containerID="781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.657272 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53"} err="failed to get container status \"781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53\": rpc error: code = NotFound desc = could not find container \"781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53\": container with ID starting with 781c9f4814f32596f45ca8f36783e39cf9745733137e6b3a5832e087c7162f53 not found: ID does not exist" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.683839 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-config-data\") pod \"dc16d452-7a63-4d86-b729-2f7384b3ea73\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.683931 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-combined-ca-bundle\") pod \"dc16d452-7a63-4d86-b729-2f7384b3ea73\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.684013 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmssh\" (UniqueName: \"kubernetes.io/projected/dc16d452-7a63-4d86-b729-2f7384b3ea73-kube-api-access-vmssh\") pod \"dc16d452-7a63-4d86-b729-2f7384b3ea73\" (UID: \"dc16d452-7a63-4d86-b729-2f7384b3ea73\") " Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.714796 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc16d452-7a63-4d86-b729-2f7384b3ea73-kube-api-access-vmssh" (OuterVolumeSpecName: "kube-api-access-vmssh") pod "dc16d452-7a63-4d86-b729-2f7384b3ea73" (UID: "dc16d452-7a63-4d86-b729-2f7384b3ea73"). InnerVolumeSpecName "kube-api-access-vmssh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.735868 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-config-data" (OuterVolumeSpecName: "config-data") pod "dc16d452-7a63-4d86-b729-2f7384b3ea73" (UID: "dc16d452-7a63-4d86-b729-2f7384b3ea73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.753399 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc16d452-7a63-4d86-b729-2f7384b3ea73" (UID: "dc16d452-7a63-4d86-b729-2f7384b3ea73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.785645 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.785684 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc16d452-7a63-4d86-b729-2f7384b3ea73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.785695 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmssh\" (UniqueName: \"kubernetes.io/projected/dc16d452-7a63-4d86-b729-2f7384b3ea73-kube-api-access-vmssh\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.960695 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:40:08 crc kubenswrapper[4931]: I0130 06:40:08.982827 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:08.999620 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: E0130 06:40:08.999974 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc16d452-7a63-4d86-b729-2f7384b3ea73" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:08.999988 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc16d452-7a63-4d86-b729-2f7384b3ea73" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 06:40:09 crc kubenswrapper[4931]: E0130 06:40:09.000003 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerName="init" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.000009 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerName="init" Jan 30 06:40:09 crc kubenswrapper[4931]: E0130 06:40:09.000028 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerName="dnsmasq-dns" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.000035 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerName="dnsmasq-dns" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.000211 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" containerName="dnsmasq-dns" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.000225 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc16d452-7a63-4d86-b729-2f7384b3ea73" containerName="nova-cell1-novncproxy-novncproxy" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.000850 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.004462 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.010741 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.090570 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e49a5c-323c-46de-b34f-2fef9465e277-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.090732 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9txz\" (UniqueName: \"kubernetes.io/projected/c9e49a5c-323c-46de-b34f-2fef9465e277-kube-api-access-d9txz\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.090785 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e49a5c-323c-46de-b34f-2fef9465e277-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.109303 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.195041 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9txz\" (UniqueName: \"kubernetes.io/projected/c9e49a5c-323c-46de-b34f-2fef9465e277-kube-api-access-d9txz\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.195089 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e49a5c-323c-46de-b34f-2fef9465e277-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.195111 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e49a5c-323c-46de-b34f-2fef9465e277-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.201539 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9e49a5c-323c-46de-b34f-2fef9465e277-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.201667 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9e49a5c-323c-46de-b34f-2fef9465e277-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.210188 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9txz\" (UniqueName: \"kubernetes.io/projected/c9e49a5c-323c-46de-b34f-2fef9465e277-kube-api-access-d9txz\") pod \"nova-cell1-novncproxy-0\" (UID: \"c9e49a5c-323c-46de-b34f-2fef9465e277\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.296199 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpxfx\" (UniqueName: \"kubernetes.io/projected/0cc67f7b-ac8c-4b63-8f28-fd5135307022-kube-api-access-gpxfx\") pod \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.296319 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-config-data\") pod \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.296454 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-combined-ca-bundle\") pod \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\" (UID: \"0cc67f7b-ac8c-4b63-8f28-fd5135307022\") " Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.300564 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0cc67f7b-ac8c-4b63-8f28-fd5135307022-kube-api-access-gpxfx" (OuterVolumeSpecName: "kube-api-access-gpxfx") pod "0cc67f7b-ac8c-4b63-8f28-fd5135307022" (UID: "0cc67f7b-ac8c-4b63-8f28-fd5135307022"). InnerVolumeSpecName "kube-api-access-gpxfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.321714 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.333647 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-config-data" (OuterVolumeSpecName: "config-data") pod "0cc67f7b-ac8c-4b63-8f28-fd5135307022" (UID: "0cc67f7b-ac8c-4b63-8f28-fd5135307022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.350627 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0cc67f7b-ac8c-4b63-8f28-fd5135307022" (UID: "0cc67f7b-ac8c-4b63-8f28-fd5135307022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.400269 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpxfx\" (UniqueName: \"kubernetes.io/projected/0cc67f7b-ac8c-4b63-8f28-fd5135307022-kube-api-access-gpxfx\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.400304 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.400316 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cc67f7b-ac8c-4b63-8f28-fd5135307022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.445016 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b" path="/var/lib/kubelet/pods/bf1a68d3-4ec7-46e5-9fee-ea7f89ec8c7b/volumes" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.445781 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc16d452-7a63-4d86-b729-2f7384b3ea73" path="/var/lib/kubelet/pods/dc16d452-7a63-4d86-b729-2f7384b3ea73/volumes" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.649892 4931 generic.go:334] "Generic (PLEG): container finished" podID="0cc67f7b-ac8c-4b63-8f28-fd5135307022" containerID="51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635" exitCode=0 Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.649978 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.650002 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cc67f7b-ac8c-4b63-8f28-fd5135307022","Type":"ContainerDied","Data":"51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635"} Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.650860 4931 scope.go:117] "RemoveContainer" containerID="51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.650748 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0cc67f7b-ac8c-4b63-8f28-fd5135307022","Type":"ContainerDied","Data":"caa956bb861b331dfc23294a937380376a24f5f4a7dcf1c49c1dbdd00bea437a"} Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.678457 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.690561 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.691019 4931 scope.go:117] "RemoveContainer" containerID="51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635" Jan 30 06:40:09 crc kubenswrapper[4931]: E0130 06:40:09.692275 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635\": container with ID starting with 51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635 not found: ID does not exist" containerID="51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.692319 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635"} err="failed to get container status \"51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635\": rpc error: code = NotFound desc = could not find container \"51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635\": container with ID starting with 51fe2630264221f4af34441e598c9cd1502c1a46402bb5792e5a94d0c5b84635 not found: ID does not exist" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.696017 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: E0130 06:40:09.696453 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc67f7b-ac8c-4b63-8f28-fd5135307022" containerName="nova-scheduler-scheduler" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.696468 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc67f7b-ac8c-4b63-8f28-fd5135307022" containerName="nova-scheduler-scheduler" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.696651 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc67f7b-ac8c-4b63-8f28-fd5135307022" containerName="nova-scheduler-scheduler" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.699483 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.701696 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.704352 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.806034 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-config-data\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.806329 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxvrc\" (UniqueName: \"kubernetes.io/projected/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-kube-api-access-dxvrc\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.806510 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.853243 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 06:40:09 crc kubenswrapper[4931]: W0130 06:40:09.860524 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9e49a5c_323c_46de_b34f_2fef9465e277.slice/crio-c0dd31a4534f326db817c22d6807a0e67f9ac0247280e5e3a530799c96005dac WatchSource:0}: Error finding container c0dd31a4534f326db817c22d6807a0e67f9ac0247280e5e3a530799c96005dac: Status 404 returned error can't find the container with id c0dd31a4534f326db817c22d6807a0e67f9ac0247280e5e3a530799c96005dac Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.908178 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-config-data\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.908722 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvrc\" (UniqueName: \"kubernetes.io/projected/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-kube-api-access-dxvrc\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.908773 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.915102 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-config-data\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.923729 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:09 crc kubenswrapper[4931]: I0130 06:40:09.927109 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvrc\" (UniqueName: \"kubernetes.io/projected/ee8c9751-e3b5-4031-bb46-a7e5fae46f4e-kube-api-access-dxvrc\") pod \"nova-scheduler-0\" (UID: \"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e\") " pod="openstack/nova-scheduler-0" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.016696 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.491964 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.668255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9e49a5c-323c-46de-b34f-2fef9465e277","Type":"ContainerStarted","Data":"ee6b7b39bb21791ee55bb92d2b9260f291bcc5fdcb1a92e68a988767da00ca1b"} Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.668660 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c9e49a5c-323c-46de-b34f-2fef9465e277","Type":"ContainerStarted","Data":"c0dd31a4534f326db817c22d6807a0e67f9ac0247280e5e3a530799c96005dac"} Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.678500 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e","Type":"ContainerStarted","Data":"d547cfddc008b2b031e47460775030aadad9c1e280fedd069542e543986c0052"} Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.688304 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.688289702 podStartE2EDuration="2.688289702s" podCreationTimestamp="2026-01-30 06:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:10.684042673 +0000 UTC m=+5546.053952940" watchObservedRunningTime="2026-01-30 06:40:10.688289702 +0000 UTC m=+5546.058199969" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.692975 4931 generic.go:334] "Generic (PLEG): container finished" podID="c9b04495-2e29-4188-adbe-e6ed3669c25a" containerID="7956c67be4873a213d6ce531a234902ab8e420fef8d81bd2f9cc50a55b2ed19e" exitCode=0 Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.693058 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"c9b04495-2e29-4188-adbe-e6ed3669c25a","Type":"ContainerDied","Data":"7956c67be4873a213d6ce531a234902ab8e420fef8d81bd2f9cc50a55b2ed19e"} Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.693091 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"c9b04495-2e29-4188-adbe-e6ed3669c25a","Type":"ContainerDied","Data":"c53e952c29d0f7bb7753df2ecd373b270a2a034437bebd33a1a8707e3ab33ea8"} Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.693105 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53e952c29d0f7bb7753df2ecd373b270a2a034437bebd33a1a8707e3ab33ea8" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.746949 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.839370 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt4r9\" (UniqueName: \"kubernetes.io/projected/c9b04495-2e29-4188-adbe-e6ed3669c25a-kube-api-access-tt4r9\") pod \"c9b04495-2e29-4188-adbe-e6ed3669c25a\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.839654 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-combined-ca-bundle\") pod \"c9b04495-2e29-4188-adbe-e6ed3669c25a\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.839687 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-config-data\") pod \"c9b04495-2e29-4188-adbe-e6ed3669c25a\" (UID: \"c9b04495-2e29-4188-adbe-e6ed3669c25a\") " Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.843619 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b04495-2e29-4188-adbe-e6ed3669c25a-kube-api-access-tt4r9" (OuterVolumeSpecName: "kube-api-access-tt4r9") pod "c9b04495-2e29-4188-adbe-e6ed3669c25a" (UID: "c9b04495-2e29-4188-adbe-e6ed3669c25a"). InnerVolumeSpecName "kube-api-access-tt4r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.867213 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-config-data" (OuterVolumeSpecName: "config-data") pod "c9b04495-2e29-4188-adbe-e6ed3669c25a" (UID: "c9b04495-2e29-4188-adbe-e6ed3669c25a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.868008 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9b04495-2e29-4188-adbe-e6ed3669c25a" (UID: "c9b04495-2e29-4188-adbe-e6ed3669c25a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.885034 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": read tcp 10.217.0.2:44348->10.217.1.76:8774: read: connection reset by peer" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.885118 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": read tcp 10.217.0.2:44340->10.217.1.76:8774: read: connection reset by peer" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.908793 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": read tcp 10.217.0.2:48292->10.217.1.75:8775: read: connection reset by peer" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.909032 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": read tcp 10.217.0.2:48286->10.217.1.75:8775: read: connection reset by peer" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.943319 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt4r9\" (UniqueName: \"kubernetes.io/projected/c9b04495-2e29-4188-adbe-e6ed3669c25a-kube-api-access-tt4r9\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.943367 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:10 crc kubenswrapper[4931]: I0130 06:40:10.943381 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b04495-2e29-4188-adbe-e6ed3669c25a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.219491 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.245493 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chqpp\" (UniqueName: \"kubernetes.io/projected/2530f454-5ee2-4767-8c0b-75d50ba8a44b-kube-api-access-chqpp\") pod \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.245558 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-combined-ca-bundle\") pod \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.245586 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-config-data\") pod \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.245653 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2530f454-5ee2-4767-8c0b-75d50ba8a44b-logs\") pod \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\" (UID: \"2530f454-5ee2-4767-8c0b-75d50ba8a44b\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.246278 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2530f454-5ee2-4767-8c0b-75d50ba8a44b-logs" (OuterVolumeSpecName: "logs") pod "2530f454-5ee2-4767-8c0b-75d50ba8a44b" (UID: "2530f454-5ee2-4767-8c0b-75d50ba8a44b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.282952 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2530f454-5ee2-4767-8c0b-75d50ba8a44b-kube-api-access-chqpp" (OuterVolumeSpecName: "kube-api-access-chqpp") pod "2530f454-5ee2-4767-8c0b-75d50ba8a44b" (UID: "2530f454-5ee2-4767-8c0b-75d50ba8a44b"). InnerVolumeSpecName "kube-api-access-chqpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.300728 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2530f454-5ee2-4767-8c0b-75d50ba8a44b" (UID: "2530f454-5ee2-4767-8c0b-75d50ba8a44b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.307457 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-config-data" (OuterVolumeSpecName: "config-data") pod "2530f454-5ee2-4767-8c0b-75d50ba8a44b" (UID: "2530f454-5ee2-4767-8c0b-75d50ba8a44b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.347206 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chqpp\" (UniqueName: \"kubernetes.io/projected/2530f454-5ee2-4767-8c0b-75d50ba8a44b-kube-api-access-chqpp\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.347244 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.347254 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2530f454-5ee2-4767-8c0b-75d50ba8a44b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.347262 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2530f454-5ee2-4767-8c0b-75d50ba8a44b-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.446451 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cc67f7b-ac8c-4b63-8f28-fd5135307022" path="/var/lib/kubelet/pods/0cc67f7b-ac8c-4b63-8f28-fd5135307022/volumes" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.466016 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.655115 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-combined-ca-bundle\") pod \"4c56d95d-5087-41db-a759-2273aef32a3c\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.655586 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-config-data\") pod \"4c56d95d-5087-41db-a759-2273aef32a3c\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.655709 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c56d95d-5087-41db-a759-2273aef32a3c-logs\") pod \"4c56d95d-5087-41db-a759-2273aef32a3c\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.655764 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcgzf\" (UniqueName: \"kubernetes.io/projected/4c56d95d-5087-41db-a759-2273aef32a3c-kube-api-access-wcgzf\") pod \"4c56d95d-5087-41db-a759-2273aef32a3c\" (UID: \"4c56d95d-5087-41db-a759-2273aef32a3c\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.658603 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c56d95d-5087-41db-a759-2273aef32a3c-logs" (OuterVolumeSpecName: "logs") pod "4c56d95d-5087-41db-a759-2273aef32a3c" (UID: "4c56d95d-5087-41db-a759-2273aef32a3c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.671706 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c56d95d-5087-41db-a759-2273aef32a3c-kube-api-access-wcgzf" (OuterVolumeSpecName: "kube-api-access-wcgzf") pod "4c56d95d-5087-41db-a759-2273aef32a3c" (UID: "4c56d95d-5087-41db-a759-2273aef32a3c"). InnerVolumeSpecName "kube-api-access-wcgzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.682678 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c56d95d-5087-41db-a759-2273aef32a3c" (UID: "4c56d95d-5087-41db-a759-2273aef32a3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.727582 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-config-data" (OuterVolumeSpecName: "config-data") pod "4c56d95d-5087-41db-a759-2273aef32a3c" (UID: "4c56d95d-5087-41db-a759-2273aef32a3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.727727 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee8c9751-e3b5-4031-bb46-a7e5fae46f4e","Type":"ContainerStarted","Data":"e691c1952efaf391dd8add1402f864ca4904050c83ad6a836531fb2e18c4d1da"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.746357 4931 generic.go:334] "Generic (PLEG): container finished" podID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerID="b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155" exitCode=0 Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.746451 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2530f454-5ee2-4767-8c0b-75d50ba8a44b","Type":"ContainerDied","Data":"b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.746473 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2530f454-5ee2-4767-8c0b-75d50ba8a44b","Type":"ContainerDied","Data":"aef8e55688d59f4d24924520cc3d195a0fe2137c7ba9584c52f87f8f22ad3859"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.746490 4931 scope.go:117] "RemoveContainer" containerID="b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.746597 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.747065 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.747052541 podStartE2EDuration="2.747052541s" podCreationTimestamp="2026-01-30 06:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:11.73916438 +0000 UTC m=+5547.109074637" watchObservedRunningTime="2026-01-30 06:40:11.747052541 +0000 UTC m=+5547.116962808" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.759550 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.759594 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c56d95d-5087-41db-a759-2273aef32a3c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.759607 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcgzf\" (UniqueName: \"kubernetes.io/projected/4c56d95d-5087-41db-a759-2273aef32a3c-kube-api-access-wcgzf\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.759619 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c56d95d-5087-41db-a759-2273aef32a3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.763373 4931 generic.go:334] "Generic (PLEG): container finished" podID="4c56d95d-5087-41db-a759-2273aef32a3c" containerID="7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a" exitCode=0 Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.763468 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c56d95d-5087-41db-a759-2273aef32a3c","Type":"ContainerDied","Data":"7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.763508 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c56d95d-5087-41db-a759-2273aef32a3c","Type":"ContainerDied","Data":"927c72bfe637ff62dfcafae17c76ddbeded161c27b84f0c5f70d67241dbe6fdd"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.763587 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.772214 4931 generic.go:334] "Generic (PLEG): container finished" podID="2e99598c-cc27-462b-8c5b-9647fdc031dc" containerID="a67c2af59ce774fac5d99d16e2c4d0308297f692031a8a81e468f5bef97702ac" exitCode=0 Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.772332 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.773968 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e99598c-cc27-462b-8c5b-9647fdc031dc","Type":"ContainerDied","Data":"a67c2af59ce774fac5d99d16e2c4d0308297f692031a8a81e468f5bef97702ac"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.774041 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e99598c-cc27-462b-8c5b-9647fdc031dc","Type":"ContainerDied","Data":"7484fc206458a3c7c0f0725319e96b32501a736307f2141c6d77f6213f261ff9"} Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.774060 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7484fc206458a3c7c0f0725319e96b32501a736307f2141c6d77f6213f261ff9" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.779238 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.805955 4931 scope.go:117] "RemoveContainer" containerID="d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.816728 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.843186 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.846307 4931 scope.go:117] "RemoveContainer" containerID="b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.848300 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155\": container with ID starting with b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155 not found: ID does not exist" containerID="b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.848364 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155"} err="failed to get container status \"b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155\": rpc error: code = NotFound desc = could not find container \"b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155\": container with ID starting with b0f9685dc882224c0d513d40000ac1e1655fbf5c7112da5e2f6f7f5866677155 not found: ID does not exist" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.848385 4931 scope.go:117] "RemoveContainer" containerID="d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.849807 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc\": container with ID starting with d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc not found: ID does not exist" containerID="d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.849853 4931 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc"} err="failed to get container status \"d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc\": rpc error: code = NotFound desc = could not find container \"d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc\": container with ID starting with d792fe84be06f29a0f9a0f35ef73368f3a67a29ef7cf6174c301187eb913f1bc not found: ID does not exist" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.849869 4931 scope.go:117] "RemoveContainer" containerID="7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.855385 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874073 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.874432 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b04495-2e29-4188-adbe-e6ed3669c25a" containerName="nova-cell1-conductor-conductor" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874448 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b04495-2e29-4188-adbe-e6ed3669c25a" containerName="nova-cell1-conductor-conductor" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.874477 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-log" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874486 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-log" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.874496 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-metadata" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874502 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-metadata" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.874512 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-api" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874518 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-api" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.874526 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-log" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874533 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-log" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.874541 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e99598c-cc27-462b-8c5b-9647fdc031dc" containerName="nova-cell0-conductor-conductor" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874546 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e99598c-cc27-462b-8c5b-9647fdc031dc" containerName="nova-cell0-conductor-conductor" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874700 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-log" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874712 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-log" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874722 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" containerName="nova-metadata-metadata" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874735 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e99598c-cc27-462b-8c5b-9647fdc031dc" containerName="nova-cell0-conductor-conductor" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874743 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" containerName="nova-api-api" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.874752 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b04495-2e29-4188-adbe-e6ed3669c25a" containerName="nova-cell1-conductor-conductor" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.875590 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.880465 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.903917 4931 scope.go:117] "RemoveContainer" containerID="a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.904328 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.916902 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.927611 4931 scope.go:117] "RemoveContainer" containerID="7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a" Jan 30 06:40:11 crc kubenswrapper[4931]: E0130 06:40:11.928128 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a\": container with ID starting with 7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a not found: ID does not exist" containerID="7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.928169 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a"} err="failed to get container status \"7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a\": rpc error: code = NotFound desc = could not find container \"7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a\": container with ID starting with 7ba088cb21d714f69636911e519755523520f6a3c24d7bc8ee678e814ba0634a not found: ID does not exist" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.928192 4931 scope.go:117] "RemoveContainer" containerID="a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.929193 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: 
E0130 06:40:11.929799 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd\": container with ID starting with a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd not found: ID does not exist" containerID="a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.929827 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd"} err="failed to get container status \"a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd\": rpc error: code = NotFound desc = could not find container \"a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd\": container with ID starting with a9c10b94942609fec99a9c8583d382387968d89a6e00090015b7a098ed4bbbfd not found: ID does not exist" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.930373 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.932271 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.957173 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.962358 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.962954 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-combined-ca-bundle\") pod \"2e99598c-cc27-462b-8c5b-9647fdc031dc\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.963108 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-config-data\") pod \"2e99598c-cc27-462b-8c5b-9647fdc031dc\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.963147 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w9bw\" (UniqueName: \"kubernetes.io/projected/2e99598c-cc27-462b-8c5b-9647fdc031dc-kube-api-access-7w9bw\") pod \"2e99598c-cc27-462b-8c5b-9647fdc031dc\" (UID: \"2e99598c-cc27-462b-8c5b-9647fdc031dc\") " Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.968914 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.970747 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e99598c-cc27-462b-8c5b-9647fdc031dc-kube-api-access-7w9bw" (OuterVolumeSpecName: "kube-api-access-7w9bw") pod "2e99598c-cc27-462b-8c5b-9647fdc031dc" (UID: "2e99598c-cc27-462b-8c5b-9647fdc031dc"). InnerVolumeSpecName "kube-api-access-7w9bw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.976635 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.978216 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:40:11 crc kubenswrapper[4931]: I0130 06:40:11.981844 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:11.997753 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.005545 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-config-data" (OuterVolumeSpecName: "config-data") pod "2e99598c-cc27-462b-8c5b-9647fdc031dc" (UID: "2e99598c-cc27-462b-8c5b-9647fdc031dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.036961 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e99598c-cc27-462b-8c5b-9647fdc031dc" (UID: "2e99598c-cc27-462b-8c5b-9647fdc031dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.064872 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d940c452-f401-4c40-accd-cb3178bc0490-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.064914 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12d7d7a-0b33-425e-98be-5a28ef924b22-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.064948 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d940c452-f401-4c40-accd-cb3178bc0490-logs\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.064974 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4lxm\" (UniqueName: \"kubernetes.io/projected/d940c452-f401-4c40-accd-cb3178bc0490-kube-api-access-h4lxm\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.065094 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12d7d7a-0b33-425e-98be-5a28ef924b22-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.065142 4931 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75245\" (UniqueName: \"kubernetes.io/projected/a12d7d7a-0b33-425e-98be-5a28ef924b22-kube-api-access-75245\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.065174 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d940c452-f401-4c40-accd-cb3178bc0490-config-data\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.065379 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.065416 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e99598c-cc27-462b-8c5b-9647fdc031dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.065455 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w9bw\" (UniqueName: \"kubernetes.io/projected/2e99598c-cc27-462b-8c5b-9647fdc031dc-kube-api-access-7w9bw\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166364 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12d7d7a-0b33-425e-98be-5a28ef924b22-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166473 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36294ba3-fcdd-45cd-b4ff-20ee280751da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166498 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75245\" (UniqueName: \"kubernetes.io/projected/a12d7d7a-0b33-425e-98be-5a28ef924b22-kube-api-access-75245\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166515 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36294ba3-fcdd-45cd-b4ff-20ee280751da-logs\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166551 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d940c452-f401-4c40-accd-cb3178bc0490-config-data\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166656 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d940c452-f401-4c40-accd-cb3178bc0490-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166705 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12d7d7a-0b33-425e-98be-5a28ef924b22-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166766 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d940c452-f401-4c40-accd-cb3178bc0490-logs\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166813 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4lxm\" (UniqueName: \"kubernetes.io/projected/d940c452-f401-4c40-accd-cb3178bc0490-kube-api-access-h4lxm\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.166885 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwlhm\" (UniqueName: \"kubernetes.io/projected/36294ba3-fcdd-45cd-b4ff-20ee280751da-kube-api-access-kwlhm\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.167028 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36294ba3-fcdd-45cd-b4ff-20ee280751da-config-data\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.167258 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d940c452-f401-4c40-accd-cb3178bc0490-logs\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.170770 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d940c452-f401-4c40-accd-cb3178bc0490-config-data\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.171027 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12d7d7a-0b33-425e-98be-5a28ef924b22-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.172140 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d940c452-f401-4c40-accd-cb3178bc0490-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.179037 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a12d7d7a-0b33-425e-98be-5a28ef924b22-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.200780 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4lxm\" (UniqueName: \"kubernetes.io/projected/d940c452-f401-4c40-accd-cb3178bc0490-kube-api-access-h4lxm\") pod \"nova-api-0\" (UID: \"d940c452-f401-4c40-accd-cb3178bc0490\") " pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.205818 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75245\" (UniqueName: \"kubernetes.io/projected/a12d7d7a-0b33-425e-98be-5a28ef924b22-kube-api-access-75245\") pod \"nova-cell1-conductor-0\" (UID: \"a12d7d7a-0b33-425e-98be-5a28ef924b22\") " pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.210681 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.257936 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.269061 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36294ba3-fcdd-45cd-b4ff-20ee280751da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.269404 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36294ba3-fcdd-45cd-b4ff-20ee280751da-logs\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.269617 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwlhm\" (UniqueName: \"kubernetes.io/projected/36294ba3-fcdd-45cd-b4ff-20ee280751da-kube-api-access-kwlhm\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.269736 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36294ba3-fcdd-45cd-b4ff-20ee280751da-config-data\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.269877 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36294ba3-fcdd-45cd-b4ff-20ee280751da-logs\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.273924 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36294ba3-fcdd-45cd-b4ff-20ee280751da-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.276163 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36294ba3-fcdd-45cd-b4ff-20ee280751da-config-data\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.286984 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwlhm\" (UniqueName: \"kubernetes.io/projected/36294ba3-fcdd-45cd-b4ff-20ee280751da-kube-api-access-kwlhm\") pod \"nova-metadata-0\" (UID: \"36294ba3-fcdd-45cd-b4ff-20ee280751da\") " pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.336774 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.750980 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.789894 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.790090 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d940c452-f401-4c40-accd-cb3178bc0490","Type":"ContainerStarted","Data":"6495728d5eb291bc298003d3849c7a2291b20db607092e67a996db42ddd92122"} Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.876666 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.896869 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.909146 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.926492 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.928136 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.936905 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.939714 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:40:12 crc kubenswrapper[4931]: W0130 06:40:12.944068 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36294ba3_fcdd_45cd_b4ff_20ee280751da.slice/crio-0de1279c14e6faaac345c37b480ef12bead176ff1e3f16d3852d6ef4545ce2e3 WatchSource:0}: Error finding container 0de1279c14e6faaac345c37b480ef12bead176ff1e3f16d3852d6ef4545ce2e3: Status 404 returned error can't find the container with id 0de1279c14e6faaac345c37b480ef12bead176ff1e3f16d3852d6ef4545ce2e3 Jan 30 06:40:12 crc kubenswrapper[4931]: I0130 06:40:12.959284 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.084841 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79469af6-a764-49c6-beaf-b49185c1028a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.084892 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79469af6-a764-49c6-beaf-b49185c1028a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.084940 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwjc\" (UniqueName: \"kubernetes.io/projected/79469af6-a764-49c6-beaf-b49185c1028a-kube-api-access-wxwjc\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.186172 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79469af6-a764-49c6-beaf-b49185c1028a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.186528 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79469af6-a764-49c6-beaf-b49185c1028a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.186589 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwjc\" (UniqueName: \"kubernetes.io/projected/79469af6-a764-49c6-beaf-b49185c1028a-kube-api-access-wxwjc\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.190216 4931 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79469af6-a764-49c6-beaf-b49185c1028a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.190279 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79469af6-a764-49c6-beaf-b49185c1028a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.206859 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwjc\" (UniqueName: \"kubernetes.io/projected/79469af6-a764-49c6-beaf-b49185c1028a-kube-api-access-wxwjc\") pod \"nova-cell0-conductor-0\" (UID: \"79469af6-a764-49c6-beaf-b49185c1028a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.278123 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.433135 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2530f454-5ee2-4767-8c0b-75d50ba8a44b" path="/var/lib/kubelet/pods/2530f454-5ee2-4767-8c0b-75d50ba8a44b/volumes" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.433815 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e99598c-cc27-462b-8c5b-9647fdc031dc" path="/var/lib/kubelet/pods/2e99598c-cc27-462b-8c5b-9647fdc031dc/volumes" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.434542 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c56d95d-5087-41db-a759-2273aef32a3c" path="/var/lib/kubelet/pods/4c56d95d-5087-41db-a759-2273aef32a3c/volumes" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.435807 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b04495-2e29-4188-adbe-e6ed3669c25a" path="/var/lib/kubelet/pods/c9b04495-2e29-4188-adbe-e6ed3669c25a/volumes" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.726216 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 06:40:13 crc kubenswrapper[4931]: W0130 06:40:13.727632 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79469af6_a764_49c6_beaf_b49185c1028a.slice/crio-7e510364c79ed8c922a3e902159f23b4f7230095aca34552eca7cbe39662356b WatchSource:0}: Error finding container 7e510364c79ed8c922a3e902159f23b4f7230095aca34552eca7cbe39662356b: Status 404 returned error can't find the container with id 7e510364c79ed8c922a3e902159f23b4f7230095aca34552eca7cbe39662356b Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.802248 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"79469af6-a764-49c6-beaf-b49185c1028a","Type":"ContainerStarted","Data":"7e510364c79ed8c922a3e902159f23b4f7230095aca34552eca7cbe39662356b"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.803523 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a12d7d7a-0b33-425e-98be-5a28ef924b22","Type":"ContainerStarted","Data":"9c3841e76949c898f5084dfbae3848ae947bb2904d6e1524763919edb31359be"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.803555 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a12d7d7a-0b33-425e-98be-5a28ef924b22","Type":"ContainerStarted","Data":"6f216ac671f7c21db7f3cdf84661e810eae62c7e4d3d98ff0ef3c27955f3208a"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.803768 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.807252 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d940c452-f401-4c40-accd-cb3178bc0490","Type":"ContainerStarted","Data":"cfb777df56a8a7c5b99556e60a375cf7ef57707229568ce73495832b458767a3"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.807278 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d940c452-f401-4c40-accd-cb3178bc0490","Type":"ContainerStarted","Data":"618a6fdd12b73afd5214a120da54b9f11515ef3e4da9fd5f154fcdc8715b32b5"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.809260 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36294ba3-fcdd-45cd-b4ff-20ee280751da","Type":"ContainerStarted","Data":"4ce125804c1146334c7ecfb1ec53114a5b004f6125b34f0141167c6e3dc499b6"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.809301 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36294ba3-fcdd-45cd-b4ff-20ee280751da","Type":"ContainerStarted","Data":"8bd0d94b0dcb72a0c45c8d080998a74a848a0d84e80c7f099ef2e81819c37caf"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.809311 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36294ba3-fcdd-45cd-b4ff-20ee280751da","Type":"ContainerStarted","Data":"0de1279c14e6faaac345c37b480ef12bead176ff1e3f16d3852d6ef4545ce2e3"} Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.829791 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.829769508 podStartE2EDuration="2.829769508s" podCreationTimestamp="2026-01-30 06:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:13.823741709 +0000 UTC m=+5549.193651966" watchObservedRunningTime="2026-01-30 06:40:13.829769508 +0000 UTC m=+5549.199679765" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.844118 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.844089018 podStartE2EDuration="2.844089018s" podCreationTimestamp="2026-01-30 06:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:13.83879341 +0000 UTC m=+5549.208703657" watchObservedRunningTime="2026-01-30 06:40:13.844089018 +0000 UTC m=+5549.213999275" Jan 30 06:40:13 crc kubenswrapper[4931]: I0130 06:40:13.857641 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.857624807 podStartE2EDuration="2.857624807s" podCreationTimestamp="2026-01-30 06:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:13.856265609 +0000 UTC m=+5549.226175886" watchObservedRunningTime="2026-01-30 06:40:13.857624807 
+0000 UTC m=+5549.227535064" Jan 30 06:40:14 crc kubenswrapper[4931]: I0130 06:40:14.322834 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:14 crc kubenswrapper[4931]: I0130 06:40:14.818603 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"79469af6-a764-49c6-beaf-b49185c1028a","Type":"ContainerStarted","Data":"e907de379cac71ffd9eab7b864bb9ea7f16af34f0a9aa65c9d273acd302937e7"} Jan 30 06:40:14 crc kubenswrapper[4931]: I0130 06:40:14.840269 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.840249516 podStartE2EDuration="2.840249516s" podCreationTimestamp="2026-01-30 06:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:14.831126891 +0000 UTC m=+5550.201037158" watchObservedRunningTime="2026-01-30 06:40:14.840249516 +0000 UTC m=+5550.210159773" Jan 30 06:40:15 crc kubenswrapper[4931]: I0130 06:40:15.017452 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 06:40:15 crc kubenswrapper[4931]: I0130 06:40:15.829472 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:17 crc kubenswrapper[4931]: I0130 06:40:17.337920 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:40:17 crc kubenswrapper[4931]: I0130 06:40:17.339188 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 06:40:18 crc kubenswrapper[4931]: I0130 06:40:18.310675 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 06:40:19 crc kubenswrapper[4931]: I0130 06:40:19.322281 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:19 crc kubenswrapper[4931]: I0130 06:40:19.336455 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:19 crc kubenswrapper[4931]: I0130 06:40:19.884136 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 06:40:20 crc kubenswrapper[4931]: I0130 06:40:20.016983 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 06:40:20 crc kubenswrapper[4931]: I0130 06:40:20.049554 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 06:40:20 crc kubenswrapper[4931]: I0130 06:40:20.930736 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 06:40:22 crc kubenswrapper[4931]: I0130 06:40:22.211082 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 06:40:22 crc kubenswrapper[4931]: I0130 06:40:22.211136 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 06:40:22 crc kubenswrapper[4931]: I0130 06:40:22.286818 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 06:40:22 crc kubenswrapper[4931]: I0130 06:40:22.338234 4931 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 06:40:22 crc kubenswrapper[4931]: I0130 06:40:22.338293 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 06:40:23 crc kubenswrapper[4931]: I0130 06:40:23.293593 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d940c452-f401-4c40-accd-cb3178bc0490" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:40:23 crc kubenswrapper[4931]: I0130 06:40:23.293661 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d940c452-f401-4c40-accd-cb3178bc0490" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:40:23 crc kubenswrapper[4931]: I0130 06:40:23.420608 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="36294ba3-fcdd-45cd-b4ff-20ee280751da" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.88:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:40:23 crc kubenswrapper[4931]: I0130 06:40:23.421011 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="36294ba3-fcdd-45cd-b4ff-20ee280751da" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.88:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.604952 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.607789 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.610523 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.628695 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.783204 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.783271 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.783379 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-scripts\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.783464 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2007a5f0-e092-4e2d-b41b-a32d073affcb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.783559 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.783600 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4smft\" (UniqueName: \"kubernetes.io/projected/2007a5f0-e092-4e2d-b41b-a32d073affcb-kube-api-access-4smft\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.884625 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.884664 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.884736 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-scripts\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.885473 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2007a5f0-e092-4e2d-b41b-a32d073affcb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.885551 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.885579 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4smft\" (UniqueName: \"kubernetes.io/projected/2007a5f0-e092-4e2d-b41b-a32d073affcb-kube-api-access-4smft\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.885729 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2007a5f0-e092-4e2d-b41b-a32d073affcb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.889912 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.890553 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.891621 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.906989 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-scripts\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc kubenswrapper[4931]: I0130 06:40:26.922948 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4smft\" (UniqueName: \"kubernetes.io/projected/2007a5f0-e092-4e2d-b41b-a32d073affcb-kube-api-access-4smft\") pod \"cinder-scheduler-0\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:26 crc 
kubenswrapper[4931]: I0130 06:40:26.943800 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 06:40:27 crc kubenswrapper[4931]: W0130 06:40:27.475562 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2007a5f0_e092_4e2d_b41b_a32d073affcb.slice/crio-d15afda74e3554cef98d12203f17762f433838a8050f7e4bd23eddab3a91800b WatchSource:0}: Error finding container d15afda74e3554cef98d12203f17762f433838a8050f7e4bd23eddab3a91800b: Status 404 returned error can't find the container with id d15afda74e3554cef98d12203f17762f433838a8050f7e4bd23eddab3a91800b Jan 30 06:40:27 crc kubenswrapper[4931]: I0130 06:40:27.479127 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:27 crc kubenswrapper[4931]: I0130 06:40:27.956610 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2007a5f0-e092-4e2d-b41b-a32d073affcb","Type":"ContainerStarted","Data":"d15afda74e3554cef98d12203f17762f433838a8050f7e4bd23eddab3a91800b"} Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.163662 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.163899 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api-log" containerID="cri-o://63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a" gracePeriod=30 Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.164299 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api" containerID="cri-o://fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71" gracePeriod=30 Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.865622 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.867330 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.869289 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.890388 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.967621 4931 generic.go:334] "Generic (PLEG): container finished" podID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerID="63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a" exitCode=143 Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.967689 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"91d301dc-5f68-4e1b-ae27-51aa02e45789","Type":"ContainerDied","Data":"63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a"} Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.970444 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2007a5f0-e092-4e2d-b41b-a32d073affcb","Type":"ContainerStarted","Data":"804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7"} Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.970489 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2007a5f0-e092-4e2d-b41b-a32d073affcb","Type":"ContainerStarted","Data":"a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341"} Jan 30 06:40:28 crc kubenswrapper[4931]: I0130 06:40:28.995945 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.995919395 podStartE2EDuration="2.995919395s" podCreationTimestamp="2026-01-30 06:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:28.99143658 +0000 UTC m=+5564.361346857" watchObservedRunningTime="2026-01-30 06:40:28.995919395 +0000 UTC m=+5564.365829652" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022295 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-run\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022354 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022411 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022459 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-combined-ca-bundle\") pod 
\"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022504 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022530 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022567 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022591 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022613 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022684 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022756 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbzl5\" (UniqueName: \"kubernetes.io/projected/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-kube-api-access-sbzl5\") pod 
\"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.022884 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.023012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.023067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124367 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124481 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124501 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124530 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-run\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124551 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124545 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124598 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124628 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124657 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124677 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124761 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124812 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124828 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.124953 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125102 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125176 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125537 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125569 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125610 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125634 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125688 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125733 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125753 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125800 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.125866 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbzl5\" (UniqueName: \"kubernetes.io/projected/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-kube-api-access-sbzl5\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.126087 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-run\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.130239 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.130596 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.130943 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.132094 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.144594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.155932 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbzl5\" (UniqueName: \"kubernetes.io/projected/6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc-kube-api-access-sbzl5\") pod \"cinder-volume-volume1-0\" (UID: \"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc\") " pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.182865 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
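The block above traces the kubelet volume manager's reconcile pattern: reconciler_common.go logs "MountVolume started" as it walks the desired state of the pod's volumes, and operation_generator.go logs "SetUp succeeded" as each plugin finishes; host-path mounts (etc-machine-id, lib-modules, run) complete in the same millisecond, while secret and projected volumes take a few milliseconds to materialize. A minimal sketch of that desired-versus-actual loop, with illustrative names rather than kubelet's real types:

package main

import "fmt"

// volume is an illustrative stand-in for a desired-state entry.
type volume struct {
	name   string
	plugin string // e.g. "kubernetes.io/host-path", "kubernetes.io/secret"
}

// setUp stands in for the plugin work (bind-mounting a host path,
// materializing a secret payload, and so on).
func setUp(v volume) error { return nil }

func main() {
	desired := []volume{
		{"etc-machine-id", "kubernetes.io/host-path"},
		{"scripts", "kubernetes.io/secret"},
		{"ceph", "kubernetes.io/projected"},
	}
	mounted := map[string]bool{} // actual state of world

	for _, v := range desired {
		if mounted[v.name] {
			continue // already mounted; nothing to reconcile
		}
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		if err := setUp(v); err != nil {
			fmt.Printf("MountVolume.SetUp failed for %q: %v\n", v.name, err)
			continue // retried on the next reconcile pass
		}
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

Failed SetUp calls are simply retried on the next pass, which is why "started" and "succeeded" lines can interleave freely in the log.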
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.757280 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 30 06:40:29 crc kubenswrapper[4931]: W0130 06:40:29.772043 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac5ad1d_f9ee_4fe6_8625_d30e49c099fc.slice/crio-5642d24f487a3fbd2fcb0926ffcb8b2a69a6c03c5d478f10c7fb2b597fadb0fe WatchSource:0}: Error finding container 5642d24f487a3fbd2fcb0926ffcb8b2a69a6c03c5d478f10c7fb2b597fadb0fe: Status 404 returned error can't find the container with id 5642d24f487a3fbd2fcb0926ffcb8b2a69a6c03c5d478f10c7fb2b597fadb0fe
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.787383 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.790094 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.794847 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.809280 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941069 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-scripts\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941133 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941159 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54m9w\" (UniqueName: \"kubernetes.io/projected/9be12b3c-c79f-4719-ab10-e3370519fbe3-kube-api-access-54m9w\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941208 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-config-data\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941265 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941283 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName:
\"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941324 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9be12b3c-c79f-4719-ab10-e3370519fbe3-ceph\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941346 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941367 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-sys\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941388 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941463 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-lib-modules\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941506 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-run\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941534 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941553 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-dev\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.941621 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc 
kubenswrapper[4931]: I0130 06:40:29.941643 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:29 crc kubenswrapper[4931]: I0130 06:40:29.979771 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc","Type":"ContainerStarted","Data":"5642d24f487a3fbd2fcb0926ffcb8b2a69a6c03c5d478f10c7fb2b597fadb0fe"} Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.042911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-scripts\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043141 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043247 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54m9w\" (UniqueName: \"kubernetes.io/projected/9be12b3c-c79f-4719-ab10-e3370519fbe3-kube-api-access-54m9w\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043350 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-config-data\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043260 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043443 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043551 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043724 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9be12b3c-c79f-4719-ab10-e3370519fbe3-ceph\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc 
kubenswrapper[4931]: I0130 06:40:30.043741 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043767 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043868 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-sys\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043887 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-sys\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.043899 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044048 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-lib-modules\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044136 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-run\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044161 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-lib-modules\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044218 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 
06:40:30.044190 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044246 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-run\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044291 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-dev\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044259 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-dev\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044387 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044450 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044540 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.044645 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9be12b3c-c79f-4719-ab10-e3370519fbe3-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.048779 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9be12b3c-c79f-4719-ab10-e3370519fbe3-ceph\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.049514 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-config-data\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0" Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.051996 4931 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.052253 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.059980 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be12b3c-c79f-4719-ab10-e3370519fbe3-scripts\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.072493 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54m9w\" (UniqueName: \"kubernetes.io/projected/9be12b3c-c79f-4719-ab10-e3370519fbe3-kube-api-access-54m9w\") pod \"cinder-backup-0\" (UID: \"9be12b3c-c79f-4719-ab10-e3370519fbe3\") " pod="openstack/cinder-backup-0"
Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.141708 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.736402 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 30 06:40:30 crc kubenswrapper[4931]: I0130 06:40:30.994701 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9be12b3c-c79f-4719-ab10-e3370519fbe3","Type":"ContainerStarted","Data":"9e6db50e631e8f7cd1c4419c8dc147e9cd931bd6dde3cc6f4e72b4f794f82ae7"}
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.419941 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.83:8776/healthcheck\": dial tcp 10.217.1.83:8776: connect: connection refused"
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.786132 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
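The "Probe failed" entry above is the kubelet's HTTP readiness prober calling cinder-api's healthcheck endpoint while the pod is being replaced; "connection refused" means nothing is listening on 10.217.1.83:8776 yet, so the container is only marked unready (a failing liveness probe, by contrast, would restart it). Roughly the same check in stdlib Go, as a sketch rather than the kubelet prober itself:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe mirrors an HTTP readiness check: any transport error or a
// status outside 200-399 counts as a failure.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// endpoint taken from the log entry above
	if err := probe("http://10.217.1.83:8776/healthcheck"); err != nil {
		fmt.Println(err)
	}
}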
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881030 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data-custom\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881107 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-combined-ca-bundle\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881166 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d301dc-5f68-4e1b-ae27-51aa02e45789-logs\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881272 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91d301dc-5f68-4e1b-ae27-51aa02e45789-etc-machine-id\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881301 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv5dr\" (UniqueName: \"kubernetes.io/projected/91d301dc-5f68-4e1b-ae27-51aa02e45789-kube-api-access-lv5dr\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881342 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881369 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-scripts\") pod \"91d301dc-5f68-4e1b-ae27-51aa02e45789\" (UID: \"91d301dc-5f68-4e1b-ae27-51aa02e45789\") "
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881712 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91d301dc-5f68-4e1b-ae27-51aa02e45789-logs" (OuterVolumeSpecName: "logs") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.881814 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91d301dc-5f68-4e1b-ae27-51aa02e45789-logs\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.882504 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91d301dc-5f68-4e1b-ae27-51aa02e45789-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789").
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.886000 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.887776 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d301dc-5f68-4e1b-ae27-51aa02e45789-kube-api-access-lv5dr" (OuterVolumeSpecName: "kube-api-access-lv5dr") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789"). InnerVolumeSpecName "kube-api-access-lv5dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.890071 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-scripts" (OuterVolumeSpecName: "scripts") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.938670 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.945228 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.967469 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data" (OuterVolumeSpecName: "config-data") pod "91d301dc-5f68-4e1b-ae27-51aa02e45789" (UID: "91d301dc-5f68-4e1b-ae27-51aa02e45789"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.983358 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/91d301dc-5f68-4e1b-ae27-51aa02e45789-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.983498 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv5dr\" (UniqueName: \"kubernetes.io/projected/91d301dc-5f68-4e1b-ae27-51aa02e45789-kube-api-access-lv5dr\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.983515 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.983525 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.983538 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:31 crc kubenswrapper[4931]: I0130 06:40:31.983550 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d301dc-5f68-4e1b-ae27-51aa02e45789-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.007092 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc","Type":"ContainerStarted","Data":"b063f0ec733a75c515c3444f9995fcc2e2f8271849b0bef3378fd8d8b5836815"}
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.007143 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc","Type":"ContainerStarted","Data":"41ced069b41fadb731c4969dff4b70199edbd0b789a0831d47ae3305decc53b2"}
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.011252 4931 generic.go:334] "Generic (PLEG): container finished" podID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerID="fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71" exitCode=0
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.011316 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"91d301dc-5f68-4e1b-ae27-51aa02e45789","Type":"ContainerDied","Data":"fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71"}
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.011381 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"91d301dc-5f68-4e1b-ae27-51aa02e45789","Type":"ContainerDied","Data":"845bdc0d8784551cc86053ca285aa0afc7bce0f017005659d5e194550515ea02"}
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.011404 4931 scope.go:117] "RemoveContainer" containerID="fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.011622 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
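The "Generic (PLEG)" and "SyncLoop (PLEG)" entries come from the pod lifecycle event generator, which relists containers in the runtime and turns observed state changes into events for the sync loop: the old cinder-api container finishing with exitCode=0 surfaces as ContainerDied for both the container and its pod sandbox, while the new cinder-volume containers surface as ContainerStarted. A rough sketch of consuming such an event stream; the types are illustrative, not kubelet's:

package main

import "fmt"

// podLifecycleEvent mirrors the shape of the PLEG events in the log.
type podLifecycleEvent struct {
	podID string
	kind  string // "ContainerStarted" or "ContainerDied"
	data  string // container or sandbox ID
}

func main() {
	events := make(chan podLifecycleEvent, 4)
	events <- podLifecycleEvent{"91d301dc", "ContainerDied", "fa29d333"}
	events <- podLifecycleEvent{"6ac5ad1d", "ContainerStarted", "b063f0ec"}
	close(events)

	// the sync loop drains events and triggers a per-pod sync for each
	for ev := range events {
		fmt.Printf("SyncLoop (PLEG): pod %s %s %s\n", ev.podID, ev.kind, ev.data)
		// a real sync would reconcile the pod spec against runtime state here
	}
}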
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.038257 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.511902227 podStartE2EDuration="4.038231268s" podCreationTimestamp="2026-01-30 06:40:28 +0000 UTC" firstStartedPulling="2026-01-30 06:40:29.774726663 +0000 UTC m=+5565.144636920" lastFinishedPulling="2026-01-30 06:40:31.301055694 +0000 UTC m=+5566.670965961" observedRunningTime="2026-01-30 06:40:32.025100491 +0000 UTC m=+5567.395010748" watchObservedRunningTime="2026-01-30 06:40:32.038231268 +0000 UTC m=+5567.408141525"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.059323 4931 scope.go:117] "RemoveContainer" containerID="63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.066246 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.078486 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.092464 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 30 06:40:32 crc kubenswrapper[4931]: E0130 06:40:32.093076 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api-log"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.093096 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api-log"
Jan 30 06:40:32 crc kubenswrapper[4931]: E0130 06:40:32.093114 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.093120 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.093310 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api-log"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.093331 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" containerName="cinder-api"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.094314 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
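The startup-latency entry above can be checked from its own fields: the end-to-end duration is observedRunningTime minus podCreationTimestamp (06:40:28 to 06:40:32.038, i.e. 4.038231268s), and podStartSLOduration excludes the image-pull window between firstStartedPulling and lastFinishedPulling. The same arithmetic on the monotonic m= offsets from that entry:

package main

import "fmt"

func main() {
	// monotonic m= offsets copied from the pod_startup_latency_tracker entry
	firstStartedPulling := 5565.144636920
	lastFinishedPulling := 5566.670965961
	e2e := 4.038231268 // podStartE2EDuration

	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window:   %.9fs\n", pull)     // 1.526329041s
	fmt.Printf("podStartSLOduration: %.9fs\n", e2e-pull) // 2.511902227s, as logged
}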
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.096276 4931 scope.go:117] "RemoveContainer" containerID="fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71"
Jan 30 06:40:32 crc kubenswrapper[4931]: E0130 06:40:32.098570 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71\": container with ID starting with fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71 not found: ID does not exist" containerID="fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.098608 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71"} err="failed to get container status \"fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71\": rpc error: code = NotFound desc = could not find container \"fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71\": container with ID starting with fa29d3337c1f83563ce2742082bf23d118321e1590a92b71d5c9c2ccd94e7f71 not found: ID does not exist"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.098635 4931 scope.go:117] "RemoveContainer" containerID="63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.098862 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.101165 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 30 06:40:32 crc kubenswrapper[4931]: E0130 06:40:32.102615 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a\": container with ID starting with 63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a not found: ID does not exist" containerID="63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.102656 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a"} err="failed to get container status \"63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a\": rpc error: code = NotFound desc = could not find container \"63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a\": container with ID starting with 63f1e96dead88c4b078fa71100e8b05fe69a9964932dd921f7cca354e6177c0a not found: ID does not exist"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186543 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186637 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-scripts\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
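The NotFound errors above are the benign tail of replacing cinder-api-0: the containers were already removed along with the old pod, so the follow-up RemoveContainer calls get "rpc error: code = NotFound" from the runtime, and the kubelet logs the error and moves on, treating deletion as idempotent. A stdlib-only sketch of that pattern (kubelet's real check is against the gRPC NotFound status code, and the ID below is a placeholder):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a CRI "NotFound" gRPC status.
var errNotFound = errors.New("container not found")

// removeFromRuntime stands in for the runtime call; here the container
// is already gone, as in the log above.
func removeFromRuntime(id string) error { return errNotFound }

// remove treats "already gone" as success so retries stay harmless.
func remove(id string) error {
	if err := removeFromRuntime(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already removed; nothing to do\n", id)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	_ = remove("example-container-id")
}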
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186678 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbzz8\" (UniqueName: \"kubernetes.io/projected/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-kube-api-access-xbzz8\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186761 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-logs\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186813 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186899 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-config-data\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.186929 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-config-data-custom\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.217493 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.217570 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.218476 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.218509 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.226683 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.226807 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.288845 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-config-data-custom\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0"
Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.288936 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-etc-machine-id\") pod \"cinder-api-0\" (UID:
\"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.288983 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-scripts\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.289008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbzz8\" (UniqueName: \"kubernetes.io/projected/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-kube-api-access-xbzz8\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.289031 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-etc-machine-id\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.289089 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-logs\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.289151 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.289268 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-config-data\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.290485 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-logs\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.301485 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-config-data\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.301854 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.302186 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-config-data-custom\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " 
pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.302964 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-scripts\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.316005 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbzz8\" (UniqueName: \"kubernetes.io/projected/abe3ac27-91a6-4c8d-880c-b94ad5bd7aea-kube-api-access-xbzz8\") pod \"cinder-api-0\" (UID: \"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea\") " pod="openstack/cinder-api-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.344091 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.350631 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.354082 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 06:40:32 crc kubenswrapper[4931]: I0130 06:40:32.420238 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 06:40:33 crc kubenswrapper[4931]: I0130 06:40:33.022646 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9be12b3c-c79f-4719-ab10-e3370519fbe3","Type":"ContainerStarted","Data":"241980f49c40f95d2ccdfb5357b574f3c4bdb551b651c252c35ffe07760989a5"} Jan 30 06:40:33 crc kubenswrapper[4931]: I0130 06:40:33.023173 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 06:40:33 crc kubenswrapper[4931]: I0130 06:40:33.029476 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 06:40:33 crc kubenswrapper[4931]: I0130 06:40:33.434341 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d301dc-5f68-4e1b-ae27-51aa02e45789" path="/var/lib/kubelet/pods/91d301dc-5f68-4e1b-ae27-51aa02e45789/volumes" Jan 30 06:40:34 crc kubenswrapper[4931]: I0130 06:40:34.053899 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea","Type":"ContainerStarted","Data":"b795b23d62ea617100f52742f280a1726eb7614ebc8642bffe33eb4dd426f5a2"} Jan 30 06:40:34 crc kubenswrapper[4931]: I0130 06:40:34.053953 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea","Type":"ContainerStarted","Data":"c94b8d210b840686b1f5b5650a9d9ed09fbcc5403a55b6dbea820787dbbe4a03"} Jan 30 06:40:34 crc kubenswrapper[4931]: I0130 06:40:34.056946 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9be12b3c-c79f-4719-ab10-e3370519fbe3","Type":"ContainerStarted","Data":"dd4ba9b31d20fedfba2cc15424b62059543696bef04d4dc149131425e2fecb4f"} Jan 30 06:40:34 crc kubenswrapper[4931]: I0130 06:40:34.101916 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.307649185 podStartE2EDuration="5.10188878s" podCreationTimestamp="2026-01-30 06:40:29 +0000 UTC" firstStartedPulling="2026-01-30 06:40:30.720890614 +0000 UTC m=+5566.090800881" 
lastFinishedPulling="2026-01-30 06:40:32.515130219 +0000 UTC m=+5567.885040476" observedRunningTime="2026-01-30 06:40:34.095333447 +0000 UTC m=+5569.465243744" watchObservedRunningTime="2026-01-30 06:40:34.10188878 +0000 UTC m=+5569.471799067"
Jan 30 06:40:34 crc kubenswrapper[4931]: I0130 06:40:34.184592 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:35 crc kubenswrapper[4931]: I0130 06:40:35.073486 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"abe3ac27-91a6-4c8d-880c-b94ad5bd7aea","Type":"ContainerStarted","Data":"342d06d8ce0fb2789d4cd5be8ca534ba878a2b478b5e7f10823efea4311de174"}
Jan 30 06:40:35 crc kubenswrapper[4931]: I0130 06:40:35.075753 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 30 06:40:35 crc kubenswrapper[4931]: I0130 06:40:35.114236 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.114207321 podStartE2EDuration="3.114207321s" podCreationTimestamp="2026-01-30 06:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:35.099709356 +0000 UTC m=+5570.469619703" watchObservedRunningTime="2026-01-30 06:40:35.114207321 +0000 UTC m=+5570.484117618"
Jan 30 06:40:35 crc kubenswrapper[4931]: I0130 06:40:35.142508 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Jan 30 06:40:37 crc kubenswrapper[4931]: I0130 06:40:37.232248 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 06:40:37 crc kubenswrapper[4931]: I0130 06:40:37.347060 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 06:40:38 crc kubenswrapper[4931]: I0130 06:40:38.105309 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="cinder-scheduler" containerID="cri-o://a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341" gracePeriod=30
Jan 30 06:40:38 crc kubenswrapper[4931]: I0130 06:40:38.105376 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="probe" containerID="cri-o://804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7" gracePeriod=30
Jan 30 06:40:39 crc kubenswrapper[4931]: I0130 06:40:39.117562 4931 generic.go:334] "Generic (PLEG): container finished" podID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerID="804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7" exitCode=0
Jan 30 06:40:39 crc kubenswrapper[4931]: I0130 06:40:39.117636 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2007a5f0-e092-4e2d-b41b-a32d073affcb","Type":"ContainerDied","Data":"804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7"}
Jan 30 06:40:39 crc kubenswrapper[4931]: I0130 06:40:39.385312 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.339606 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.659156 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
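The two "Killing container with a grace period" entries begin the ordered shutdown of the old cinder-scheduler pod: each container is signaled and given up to gracePeriod=30 seconds to exit before the runtime escalates to a hard kill; here the probe container exits cleanly (exitCode=0) within a second. The escalation pattern for a single process, sketched with stdlib Go rather than kubelet's CRI call:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// ask politely first, as the kubelet does
	cmd.Process.Signal(syscall.SIGTERM)

	gracePeriod := 30 * time.Second
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		// grace period exhausted; escalate to SIGKILL
		cmd.Process.Kill()
		fmt.Println("escalated to kill:", <-done)
	}
}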
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765551 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4smft\" (UniqueName: \"kubernetes.io/projected/2007a5f0-e092-4e2d-b41b-a32d073affcb-kube-api-access-4smft\") pod \"2007a5f0-e092-4e2d-b41b-a32d073affcb\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") "
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765596 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data\") pod \"2007a5f0-e092-4e2d-b41b-a32d073affcb\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") "
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765671 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data-custom\") pod \"2007a5f0-e092-4e2d-b41b-a32d073affcb\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") "
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765699 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2007a5f0-e092-4e2d-b41b-a32d073affcb-etc-machine-id\") pod \"2007a5f0-e092-4e2d-b41b-a32d073affcb\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") "
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765777 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-scripts\") pod \"2007a5f0-e092-4e2d-b41b-a32d073affcb\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") "
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765819 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-combined-ca-bundle\") pod \"2007a5f0-e092-4e2d-b41b-a32d073affcb\" (UID: \"2007a5f0-e092-4e2d-b41b-a32d073affcb\") "
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.765997 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2007a5f0-e092-4e2d-b41b-a32d073affcb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2007a5f0-e092-4e2d-b41b-a32d073affcb" (UID: "2007a5f0-e092-4e2d-b41b-a32d073affcb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.766512 4931 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2007a5f0-e092-4e2d-b41b-a32d073affcb-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.771750 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-scripts" (OuterVolumeSpecName: "scripts") pod "2007a5f0-e092-4e2d-b41b-a32d073affcb" (UID: "2007a5f0-e092-4e2d-b41b-a32d073affcb"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.772446 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2007a5f0-e092-4e2d-b41b-a32d073affcb" (UID: "2007a5f0-e092-4e2d-b41b-a32d073affcb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.780218 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2007a5f0-e092-4e2d-b41b-a32d073affcb-kube-api-access-4smft" (OuterVolumeSpecName: "kube-api-access-4smft") pod "2007a5f0-e092-4e2d-b41b-a32d073affcb" (UID: "2007a5f0-e092-4e2d-b41b-a32d073affcb"). InnerVolumeSpecName "kube-api-access-4smft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.819449 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2007a5f0-e092-4e2d-b41b-a32d073affcb" (UID: "2007a5f0-e092-4e2d-b41b-a32d073affcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.856494 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data" (OuterVolumeSpecName: "config-data") pod "2007a5f0-e092-4e2d-b41b-a32d073affcb" (UID: "2007a5f0-e092-4e2d-b41b-a32d073affcb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.867803 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.868039 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4smft\" (UniqueName: \"kubernetes.io/projected/2007a5f0-e092-4e2d-b41b-a32d073affcb-kube-api-access-4smft\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.868052 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.868063 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:40 crc kubenswrapper[4931]: I0130 06:40:40.868073 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2007a5f0-e092-4e2d-b41b-a32d073affcb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.147677 4931 generic.go:334] "Generic (PLEG): container finished" podID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerID="a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341" exitCode=0 Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.147742 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2007a5f0-e092-4e2d-b41b-a32d073affcb","Type":"ContainerDied","Data":"a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341"} Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.147787 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2007a5f0-e092-4e2d-b41b-a32d073affcb","Type":"ContainerDied","Data":"d15afda74e3554cef98d12203f17762f433838a8050f7e4bd23eddab3a91800b"} Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.147810 4931 scope.go:117] "RemoveContainer" containerID="804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.147826 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.177801 4931 scope.go:117] "RemoveContainer" containerID="a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.201549 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.218751 4931 scope.go:117] "RemoveContainer" containerID="804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7" Jan 30 06:40:41 crc kubenswrapper[4931]: E0130 06:40:41.219174 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7\": container with ID starting with 804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7 not found: ID does not exist" containerID="804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.219214 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7"} err="failed to get container status \"804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7\": rpc error: code = NotFound desc = could not find container \"804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7\": container with ID starting with 804dabd22ea25238e25d9c6a0e8795f0865ccf0fe93f2320a7a8bd027766dca7 not found: ID does not exist" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.219241 4931 scope.go:117] "RemoveContainer" containerID="a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341" Jan 30 06:40:41 crc kubenswrapper[4931]: E0130 06:40:41.219782 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341\": container with ID starting with a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341 not found: ID does not exist" containerID="a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.219853 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341"} err="failed to get container status \"a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341\": rpc error: code = NotFound desc = could not find container \"a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341\": container with ID starting with a2c7a2fd3f1dbd78cbc3957d77cdeb8cf798121b652c6b3276edb45a82a90341 not found: ID does not exist" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.221247 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.270505 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:41 crc kubenswrapper[4931]: E0130 06:40:41.270971 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="probe" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.270988 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="probe" Jan 
30 06:40:41 crc kubenswrapper[4931]: E0130 06:40:41.271003 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="cinder-scheduler" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.271009 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="cinder-scheduler" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.271197 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="cinder-scheduler" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.271213 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" containerName="probe" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.272234 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.276687 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.289177 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.377783 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-config-data\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.377921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.378109 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-scripts\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.378177 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.378372 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41673f24-5c01-4401-839f-55da60930b4d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.378452 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trs27\" (UniqueName: \"kubernetes.io/projected/41673f24-5c01-4401-839f-55da60930b4d-kube-api-access-trs27\") pod \"cinder-scheduler-0\" (UID: 
\"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.437997 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2007a5f0-e092-4e2d-b41b-a32d073affcb" path="/var/lib/kubelet/pods/2007a5f0-e092-4e2d-b41b-a32d073affcb/volumes" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480230 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41673f24-5c01-4401-839f-55da60930b4d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480300 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trs27\" (UniqueName: \"kubernetes.io/projected/41673f24-5c01-4401-839f-55da60930b4d-kube-api-access-trs27\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480360 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-config-data\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480455 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41673f24-5c01-4401-839f-55da60930b4d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480496 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480773 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-scripts\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.480854 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.486617 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.486765 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.488460 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-scripts\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.497043 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trs27\" (UniqueName: \"kubernetes.io/projected/41673f24-5c01-4401-839f-55da60930b4d-kube-api-access-trs27\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.503308 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41673f24-5c01-4401-839f-55da60930b4d-config-data\") pod \"cinder-scheduler-0\" (UID: \"41673f24-5c01-4401-839f-55da60930b4d\") " pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.592169 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.778095 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jlzk5"] Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.780801 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.790227 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlzk5"] Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.887953 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-utilities\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.888024 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-catalog-content\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.888093 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgkbx\" (UniqueName: \"kubernetes.io/projected/eb599a5d-5068-4fab-bf45-937582c34eca-kube-api-access-sgkbx\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.992001 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-utilities\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.992151 4931 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-catalog-content\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.992274 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgkbx\" (UniqueName: \"kubernetes.io/projected/eb599a5d-5068-4fab-bf45-937582c34eca-kube-api-access-sgkbx\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.993218 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-utilities\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:41 crc kubenswrapper[4931]: I0130 06:40:41.993239 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-catalog-content\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:42 crc kubenswrapper[4931]: I0130 06:40:42.012054 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgkbx\" (UniqueName: \"kubernetes.io/projected/eb599a5d-5068-4fab-bf45-937582c34eca-kube-api-access-sgkbx\") pod \"redhat-operators-jlzk5\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:42 crc kubenswrapper[4931]: I0130 06:40:42.119798 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 06:40:42 crc kubenswrapper[4931]: I0130 06:40:42.124460 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:42 crc kubenswrapper[4931]: I0130 06:40:42.157502 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"41673f24-5c01-4401-839f-55da60930b4d","Type":"ContainerStarted","Data":"0bca28356d538272b347fc705dd6df1b5b2c095a93c54f361d0cddb95a6926cf"} Jan 30 06:40:42 crc kubenswrapper[4931]: I0130 06:40:42.586562 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlzk5"] Jan 30 06:40:43 crc kubenswrapper[4931]: I0130 06:40:43.170201 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"41673f24-5c01-4401-839f-55da60930b4d","Type":"ContainerStarted","Data":"5f39efd8390221c52290c8ba3f91c44523e0379119244139ebeb46bf34ec68d0"} Jan 30 06:40:43 crc kubenswrapper[4931]: I0130 06:40:43.172481 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb599a5d-5068-4fab-bf45-937582c34eca" containerID="53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e" exitCode=0 Jan 30 06:40:43 crc kubenswrapper[4931]: I0130 06:40:43.172511 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerDied","Data":"53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e"} Jan 30 06:40:43 crc kubenswrapper[4931]: I0130 06:40:43.172527 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerStarted","Data":"f05b848343471061d40f87134776e9f8a1736a2776fb3239d3e410c22aaedae8"} Jan 30 06:40:44 crc kubenswrapper[4931]: I0130 06:40:44.186213 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"41673f24-5c01-4401-839f-55da60930b4d","Type":"ContainerStarted","Data":"f88c3a6367ef2f74a464be569950933828eb7608482628c57c45a4dafb65e494"} Jan 30 06:40:44 crc kubenswrapper[4931]: I0130 06:40:44.191808 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerStarted","Data":"ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade"} Jan 30 06:40:44 crc kubenswrapper[4931]: I0130 06:40:44.224267 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.224246092 podStartE2EDuration="3.224246092s" podCreationTimestamp="2026-01-30 06:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:40:44.212514494 +0000 UTC m=+5579.582424751" watchObservedRunningTime="2026-01-30 06:40:44.224246092 +0000 UTC m=+5579.594156359" Jan 30 06:40:44 crc kubenswrapper[4931]: I0130 06:40:44.494870 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 06:40:46 crc kubenswrapper[4931]: I0130 06:40:46.212496 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb599a5d-5068-4fab-bf45-937582c34eca" containerID="ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade" exitCode=0 Jan 30 06:40:46 crc kubenswrapper[4931]: I0130 06:40:46.212671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" 
event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerDied","Data":"ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade"} Jan 30 06:40:46 crc kubenswrapper[4931]: I0130 06:40:46.592700 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 06:40:47 crc kubenswrapper[4931]: I0130 06:40:47.228294 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerStarted","Data":"0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f"} Jan 30 06:40:47 crc kubenswrapper[4931]: I0130 06:40:47.263039 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jlzk5" podStartSLOduration=2.567082178 podStartE2EDuration="6.263004655s" podCreationTimestamp="2026-01-30 06:40:41 +0000 UTC" firstStartedPulling="2026-01-30 06:40:43.180209574 +0000 UTC m=+5578.550119831" lastFinishedPulling="2026-01-30 06:40:46.876132051 +0000 UTC m=+5582.246042308" observedRunningTime="2026-01-30 06:40:47.255382581 +0000 UTC m=+5582.625292838" watchObservedRunningTime="2026-01-30 06:40:47.263004655 +0000 UTC m=+5582.632914952" Jan 30 06:40:51 crc kubenswrapper[4931]: I0130 06:40:51.782938 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 06:40:52 crc kubenswrapper[4931]: I0130 06:40:52.125406 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:52 crc kubenswrapper[4931]: I0130 06:40:52.125754 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:40:53 crc kubenswrapper[4931]: I0130 06:40:53.177358 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jlzk5" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="registry-server" probeResult="failure" output=< Jan 30 06:40:53 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:40:53 crc kubenswrapper[4931]: > Jan 30 06:40:57 crc kubenswrapper[4931]: I0130 06:40:57.363452 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:40:57 crc kubenswrapper[4931]: I0130 06:40:57.364169 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:41:02 crc kubenswrapper[4931]: I0130 06:41:02.188164 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:41:02 crc kubenswrapper[4931]: I0130 06:41:02.280628 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:41:02 crc kubenswrapper[4931]: I0130 06:41:02.433043 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlzk5"] Jan 30 06:41:03 crc 
kubenswrapper[4931]: I0130 06:41:03.426561 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jlzk5" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="registry-server" containerID="cri-o://0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f" gracePeriod=2 Jan 30 06:41:03 crc kubenswrapper[4931]: I0130 06:41:03.969483 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.093787 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgkbx\" (UniqueName: \"kubernetes.io/projected/eb599a5d-5068-4fab-bf45-937582c34eca-kube-api-access-sgkbx\") pod \"eb599a5d-5068-4fab-bf45-937582c34eca\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.093903 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-catalog-content\") pod \"eb599a5d-5068-4fab-bf45-937582c34eca\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.093927 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-utilities\") pod \"eb599a5d-5068-4fab-bf45-937582c34eca\" (UID: \"eb599a5d-5068-4fab-bf45-937582c34eca\") " Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.095781 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-utilities" (OuterVolumeSpecName: "utilities") pod "eb599a5d-5068-4fab-bf45-937582c34eca" (UID: "eb599a5d-5068-4fab-bf45-937582c34eca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.103090 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb599a5d-5068-4fab-bf45-937582c34eca-kube-api-access-sgkbx" (OuterVolumeSpecName: "kube-api-access-sgkbx") pod "eb599a5d-5068-4fab-bf45-937582c34eca" (UID: "eb599a5d-5068-4fab-bf45-937582c34eca"). InnerVolumeSpecName "kube-api-access-sgkbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.197354 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgkbx\" (UniqueName: \"kubernetes.io/projected/eb599a5d-5068-4fab-bf45-937582c34eca-kube-api-access-sgkbx\") on node \"crc\" DevicePath \"\"" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.197413 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.213787 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb599a5d-5068-4fab-bf45-937582c34eca" (UID: "eb599a5d-5068-4fab-bf45-937582c34eca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.299480 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb599a5d-5068-4fab-bf45-937582c34eca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.442538 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb599a5d-5068-4fab-bf45-937582c34eca" containerID="0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f" exitCode=0 Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.442600 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerDied","Data":"0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f"} Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.442640 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlzk5" event={"ID":"eb599a5d-5068-4fab-bf45-937582c34eca","Type":"ContainerDied","Data":"f05b848343471061d40f87134776e9f8a1736a2776fb3239d3e410c22aaedae8"} Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.442668 4931 scope.go:117] "RemoveContainer" containerID="0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.442872 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlzk5" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.509322 4931 scope.go:117] "RemoveContainer" containerID="ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.509454 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlzk5"] Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.533138 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jlzk5"] Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.550553 4931 scope.go:117] "RemoveContainer" containerID="53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.593042 4931 scope.go:117] "RemoveContainer" containerID="0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f" Jan 30 06:41:04 crc kubenswrapper[4931]: E0130 06:41:04.593460 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f\": container with ID starting with 0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f not found: ID does not exist" containerID="0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.593493 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f"} err="failed to get container status \"0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f\": rpc error: code = NotFound desc = could not find container \"0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f\": container with ID starting with 0ee05a3ad3b0c1019b7e1d336a82186f59b7cf43c43a12a8b93665177821ed8f not found: ID does not exist" Jan 30 06:41:04 crc 
kubenswrapper[4931]: I0130 06:41:04.593514 4931 scope.go:117] "RemoveContainer" containerID="ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade" Jan 30 06:41:04 crc kubenswrapper[4931]: E0130 06:41:04.593801 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade\": container with ID starting with ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade not found: ID does not exist" containerID="ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.593858 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade"} err="failed to get container status \"ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade\": rpc error: code = NotFound desc = could not find container \"ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade\": container with ID starting with ba3091fc93684e0350a82996dc3bab508983b4d574fa3f6038591e98d98c1ade not found: ID does not exist" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.593876 4931 scope.go:117] "RemoveContainer" containerID="53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e" Jan 30 06:41:04 crc kubenswrapper[4931]: E0130 06:41:04.594196 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e\": container with ID starting with 53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e not found: ID does not exist" containerID="53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e" Jan 30 06:41:04 crc kubenswrapper[4931]: I0130 06:41:04.594222 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e"} err="failed to get container status \"53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e\": rpc error: code = NotFound desc = could not find container \"53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e\": container with ID starting with 53f457b98adb8f7e5c0cd7641b26244cf7089d77f7f8044d2f7dcc33e6146a2e not found: ID does not exist" Jan 30 06:41:05 crc kubenswrapper[4931]: I0130 06:41:05.438620 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" path="/var/lib/kubelet/pods/eb599a5d-5068-4fab-bf45-937582c34eca/volumes" Jan 30 06:41:27 crc kubenswrapper[4931]: I0130 06:41:27.363264 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:41:27 crc kubenswrapper[4931]: I0130 06:41:27.363970 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:41:57 crc kubenswrapper[4931]: I0130 06:41:57.363048 4931 patch_prober.go:28] interesting 
pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:41:57 crc kubenswrapper[4931]: I0130 06:41:57.363695 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:41:57 crc kubenswrapper[4931]: I0130 06:41:57.363995 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:41:57 crc kubenswrapper[4931]: I0130 06:41:57.365121 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:41:57 crc kubenswrapper[4931]: I0130 06:41:57.365193 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" gracePeriod=600 Jan 30 06:41:57 crc kubenswrapper[4931]: E0130 06:41:57.493624 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:41:58 crc kubenswrapper[4931]: I0130 06:41:58.062015 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" exitCode=0 Jan 30 06:41:58 crc kubenswrapper[4931]: I0130 06:41:58.062077 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7"} Jan 30 06:41:58 crc kubenswrapper[4931]: I0130 06:41:58.062123 4931 scope.go:117] "RemoveContainer" containerID="7acc7dd93c5206c6ff8ebf3271a041083dbd9bcf0e00cc88a42d6c0b4c7429dd" Jan 30 06:41:58 crc kubenswrapper[4931]: I0130 06:41:58.062957 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:41:58 crc kubenswrapper[4931]: E0130 06:41:58.063529 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" 
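The "back-off 5m0s" in these pod_workers errors is kubelet's crash-loop backoff at its ceiling: after repeated failures, restarts are delayed exponentially (by default starting at 10s and doubling) up to a five-minute cap, and every periodic sync in between, such as the retries that follow at 06:42:12 and 06:42:26, is refused with the same CrashLoopBackOff error until the window expires. A sketch of that schedule, assuming the default base and cap (both are implementation details and configurable):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s base, doubling per failed restart,
	// capped at the 5m seen as "back-off 5m0s" in the entries above.
	const maxBackoff = 5 * time.Minute
	delay := 10 * time.Second
	for restart := 1; delay <= maxBackoff; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
	}
	fmt.Println("later restarts: wait", maxBackoff)
}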
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:42:12 crc kubenswrapper[4931]: I0130 06:42:12.422520 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:42:12 crc kubenswrapper[4931]: E0130 06:42:12.423465 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.093745 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bn5b4"] Jan 30 06:42:13 crc kubenswrapper[4931]: E0130 06:42:13.094490 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="extract-content" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.094521 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="extract-content" Jan 30 06:42:13 crc kubenswrapper[4931]: E0130 06:42:13.094570 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="registry-server" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.094583 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="registry-server" Jan 30 06:42:13 crc kubenswrapper[4931]: E0130 06:42:13.094610 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="extract-utilities" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.094625 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="extract-utilities" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.094935 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb599a5d-5068-4fab-bf45-937582c34eca" containerName="registry-server" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.097320 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.111359 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn5b4"] Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.201770 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9cc\" (UniqueName: \"kubernetes.io/projected/b9cea615-7c24-42ec-b0b3-ba654afb5e48-kube-api-access-6f9cc\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.201828 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-utilities\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.201889 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-catalog-content\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.303943 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9cc\" (UniqueName: \"kubernetes.io/projected/b9cea615-7c24-42ec-b0b3-ba654afb5e48-kube-api-access-6f9cc\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.304027 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-utilities\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.304099 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-catalog-content\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.305036 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-catalog-content\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.305194 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-utilities\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.330608 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6f9cc\" (UniqueName: \"kubernetes.io/projected/b9cea615-7c24-42ec-b0b3-ba654afb5e48-kube-api-access-6f9cc\") pod \"certified-operators-bn5b4\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.438534 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:13 crc kubenswrapper[4931]: I0130 06:42:13.990109 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn5b4"] Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.242500 4931 generic.go:334] "Generic (PLEG): container finished" podID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerID="8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf" exitCode=0 Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.242540 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn5b4" event={"ID":"b9cea615-7c24-42ec-b0b3-ba654afb5e48","Type":"ContainerDied","Data":"8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf"} Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.242568 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn5b4" event={"ID":"b9cea615-7c24-42ec-b0b3-ba654afb5e48","Type":"ContainerStarted","Data":"07b667d5a687f0233563586b73c3bec225ac3979ee53165d693ac755c7d56b48"} Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.512337 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f58fk"] Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.518780 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.533709 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f58fk"] Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.633433 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-catalog-content\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.633659 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqjz\" (UniqueName: \"kubernetes.io/projected/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-kube-api-access-qpqjz\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.633803 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-utilities\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.735730 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-utilities\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.735858 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-catalog-content\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.735900 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqjz\" (UniqueName: \"kubernetes.io/projected/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-kube-api-access-qpqjz\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.736465 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-catalog-content\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.737724 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-utilities\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.755818 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qpqjz\" (UniqueName: \"kubernetes.io/projected/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-kube-api-access-qpqjz\") pod \"redhat-marketplace-f58fk\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:14 crc kubenswrapper[4931]: I0130 06:42:14.853744 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:15 crc kubenswrapper[4931]: I0130 06:42:15.324076 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f58fk"] Jan 30 06:42:15 crc kubenswrapper[4931]: W0130 06:42:15.331593 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a3b4ace_6e1f_47cb_86a9_e5488138abc1.slice/crio-95b1f6f5d405e856dde3770b24820eb431009b98160bff8df51161ca28427373 WatchSource:0}: Error finding container 95b1f6f5d405e856dde3770b24820eb431009b98160bff8df51161ca28427373: Status 404 returned error can't find the container with id 95b1f6f5d405e856dde3770b24820eb431009b98160bff8df51161ca28427373 Jan 30 06:42:16 crc kubenswrapper[4931]: I0130 06:42:16.268017 4931 generic.go:334] "Generic (PLEG): container finished" podID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerID="6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35" exitCode=0 Jan 30 06:42:16 crc kubenswrapper[4931]: I0130 06:42:16.268097 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn5b4" event={"ID":"b9cea615-7c24-42ec-b0b3-ba654afb5e48","Type":"ContainerDied","Data":"6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35"} Jan 30 06:42:16 crc kubenswrapper[4931]: I0130 06:42:16.271083 4931 generic.go:334] "Generic (PLEG): container finished" podID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerID="82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560" exitCode=0 Jan 30 06:42:16 crc kubenswrapper[4931]: I0130 06:42:16.271125 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f58fk" event={"ID":"3a3b4ace-6e1f-47cb-86a9-e5488138abc1","Type":"ContainerDied","Data":"82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560"} Jan 30 06:42:16 crc kubenswrapper[4931]: I0130 06:42:16.271151 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f58fk" event={"ID":"3a3b4ace-6e1f-47cb-86a9-e5488138abc1","Type":"ContainerStarted","Data":"95b1f6f5d405e856dde3770b24820eb431009b98160bff8df51161ca28427373"} Jan 30 06:42:17 crc kubenswrapper[4931]: I0130 06:42:17.283504 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn5b4" event={"ID":"b9cea615-7c24-42ec-b0b3-ba654afb5e48","Type":"ContainerStarted","Data":"68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf"} Jan 30 06:42:17 crc kubenswrapper[4931]: I0130 06:42:17.286627 4931 generic.go:334] "Generic (PLEG): container finished" podID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerID="c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3" exitCode=0 Jan 30 06:42:17 crc kubenswrapper[4931]: I0130 06:42:17.286697 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f58fk" 
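The registry-server startup and readiness probes for these marketplace catalog pods target a gRPC endpoint on port 50051; the earlier failure at 06:40:53 shows the probe output "timeout: failed to connect service \":50051\" within 1s". A minimal Go equivalent of such a check using the standard gRPC health protocol; the address and 1s timeout are taken from the log, but whether the probe is exec-based or a native gRPC probe is not visible here:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. the 1s connect timeout above
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	fmt.Println("probe status:", resp.GetStatus()) // SERVING once the pod is ready
}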
event={"ID":"3a3b4ace-6e1f-47cb-86a9-e5488138abc1","Type":"ContainerDied","Data":"c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3"} Jan 30 06:42:17 crc kubenswrapper[4931]: I0130 06:42:17.327458 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bn5b4" podStartSLOduration=1.8615570830000001 podStartE2EDuration="4.327412077s" podCreationTimestamp="2026-01-30 06:42:13 +0000 UTC" firstStartedPulling="2026-01-30 06:42:14.244013436 +0000 UTC m=+5669.613923683" lastFinishedPulling="2026-01-30 06:42:16.70986838 +0000 UTC m=+5672.079778677" observedRunningTime="2026-01-30 06:42:17.304004442 +0000 UTC m=+5672.673914699" watchObservedRunningTime="2026-01-30 06:42:17.327412077 +0000 UTC m=+5672.697322354" Jan 30 06:42:18 crc kubenswrapper[4931]: I0130 06:42:18.298622 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f58fk" event={"ID":"3a3b4ace-6e1f-47cb-86a9-e5488138abc1","Type":"ContainerStarted","Data":"33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097"} Jan 30 06:42:18 crc kubenswrapper[4931]: I0130 06:42:18.331405 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f58fk" podStartSLOduration=2.914215787 podStartE2EDuration="4.331379904s" podCreationTimestamp="2026-01-30 06:42:14 +0000 UTC" firstStartedPulling="2026-01-30 06:42:16.275030105 +0000 UTC m=+5671.644940402" lastFinishedPulling="2026-01-30 06:42:17.692194252 +0000 UTC m=+5673.062104519" observedRunningTime="2026-01-30 06:42:18.323238757 +0000 UTC m=+5673.693149034" watchObservedRunningTime="2026-01-30 06:42:18.331379904 +0000 UTC m=+5673.701290191" Jan 30 06:42:23 crc kubenswrapper[4931]: I0130 06:42:23.445583 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:23 crc kubenswrapper[4931]: I0130 06:42:23.446263 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:23 crc kubenswrapper[4931]: I0130 06:42:23.534667 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:24 crc kubenswrapper[4931]: I0130 06:42:24.447628 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:24 crc kubenswrapper[4931]: I0130 06:42:24.525972 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn5b4"] Jan 30 06:42:24 crc kubenswrapper[4931]: I0130 06:42:24.854139 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:24 crc kubenswrapper[4931]: I0130 06:42:24.854249 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:24 crc kubenswrapper[4931]: I0130 06:42:24.921808 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:25 crc kubenswrapper[4931]: I0130 06:42:25.475851 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:26 crc kubenswrapper[4931]: I0130 06:42:26.403348 4931 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-bn5b4" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="registry-server" containerID="cri-o://68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf" gracePeriod=2 Jan 30 06:42:26 crc kubenswrapper[4931]: I0130 06:42:26.423134 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:42:26 crc kubenswrapper[4931]: E0130 06:42:26.423592 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:42:26 crc kubenswrapper[4931]: I0130 06:42:26.878324 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f58fk"] Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.002349 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.111047 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-catalog-content\") pod \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.111410 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-utilities\") pod \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.111789 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9cc\" (UniqueName: \"kubernetes.io/projected/b9cea615-7c24-42ec-b0b3-ba654afb5e48-kube-api-access-6f9cc\") pod \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\" (UID: \"b9cea615-7c24-42ec-b0b3-ba654afb5e48\") " Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.112667 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-utilities" (OuterVolumeSpecName: "utilities") pod "b9cea615-7c24-42ec-b0b3-ba654afb5e48" (UID: "b9cea615-7c24-42ec-b0b3-ba654afb5e48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.112996 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.121989 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9cea615-7c24-42ec-b0b3-ba654afb5e48-kube-api-access-6f9cc" (OuterVolumeSpecName: "kube-api-access-6f9cc") pod "b9cea615-7c24-42ec-b0b3-ba654afb5e48" (UID: "b9cea615-7c24-42ec-b0b3-ba654afb5e48"). InnerVolumeSpecName "kube-api-access-6f9cc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.183144 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9cea615-7c24-42ec-b0b3-ba654afb5e48" (UID: "b9cea615-7c24-42ec-b0b3-ba654afb5e48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.216349 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9cc\" (UniqueName: \"kubernetes.io/projected/b9cea615-7c24-42ec-b0b3-ba654afb5e48-kube-api-access-6f9cc\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.216467 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9cea615-7c24-42ec-b0b3-ba654afb5e48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.438355 4931 generic.go:334] "Generic (PLEG): container finished" podID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerID="68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf" exitCode=0 Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.438612 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn5b4" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.439461 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f58fk" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="registry-server" containerID="cri-o://33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097" gracePeriod=2 Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.454736 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn5b4" event={"ID":"b9cea615-7c24-42ec-b0b3-ba654afb5e48","Type":"ContainerDied","Data":"68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf"} Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.454820 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn5b4" event={"ID":"b9cea615-7c24-42ec-b0b3-ba654afb5e48","Type":"ContainerDied","Data":"07b667d5a687f0233563586b73c3bec225ac3979ee53165d693ac755c7d56b48"} Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.454862 4931 scope.go:117] "RemoveContainer" containerID="68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.500474 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn5b4"] Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.503190 4931 scope.go:117] "RemoveContainer" containerID="6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.511147 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bn5b4"] Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.574891 4931 scope.go:117] "RemoveContainer" containerID="8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.671481 4931 scope.go:117] "RemoveContainer" 
containerID="68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf" Jan 30 06:42:27 crc kubenswrapper[4931]: E0130 06:42:27.672378 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf\": container with ID starting with 68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf not found: ID does not exist" containerID="68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.672478 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf"} err="failed to get container status \"68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf\": rpc error: code = NotFound desc = could not find container \"68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf\": container with ID starting with 68259ee6ca07d14b60b659f82f59dfb82a716e0e1c7f6e1d45171b2eadc5aecf not found: ID does not exist" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.672521 4931 scope.go:117] "RemoveContainer" containerID="6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35" Jan 30 06:42:27 crc kubenswrapper[4931]: E0130 06:42:27.673123 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35\": container with ID starting with 6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35 not found: ID does not exist" containerID="6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.673163 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35"} err="failed to get container status \"6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35\": rpc error: code = NotFound desc = could not find container \"6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35\": container with ID starting with 6bd86b79bf75d3dfdcb96d3398f82d4396a81b534462fd1fef006bc55f442a35 not found: ID does not exist" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.673190 4931 scope.go:117] "RemoveContainer" containerID="8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf" Jan 30 06:42:27 crc kubenswrapper[4931]: E0130 06:42:27.673916 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf\": container with ID starting with 8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf not found: ID does not exist" containerID="8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.673959 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf"} err="failed to get container status \"8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf\": rpc error: code = NotFound desc = could not find container \"8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf\": container with ID starting with 
8d15430013f5c26b2602305fbef33139ae2278fae40d50766e8c390fcc24c7cf not found: ID does not exist" Jan 30 06:42:27 crc kubenswrapper[4931]: I0130 06:42:27.998187 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.033855 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-catalog-content\") pod \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.033957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-utilities\") pod \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.034032 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpqjz\" (UniqueName: \"kubernetes.io/projected/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-kube-api-access-qpqjz\") pod \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\" (UID: \"3a3b4ace-6e1f-47cb-86a9-e5488138abc1\") " Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.041318 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-utilities" (OuterVolumeSpecName: "utilities") pod "3a3b4ace-6e1f-47cb-86a9-e5488138abc1" (UID: "3a3b4ace-6e1f-47cb-86a9-e5488138abc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.041530 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-kube-api-access-qpqjz" (OuterVolumeSpecName: "kube-api-access-qpqjz") pod "3a3b4ace-6e1f-47cb-86a9-e5488138abc1" (UID: "3a3b4ace-6e1f-47cb-86a9-e5488138abc1"). InnerVolumeSpecName "kube-api-access-qpqjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.077561 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a3b4ace-6e1f-47cb-86a9-e5488138abc1" (UID: "3a3b4ace-6e1f-47cb-86a9-e5488138abc1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.135925 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.136219 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.136302 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpqjz\" (UniqueName: \"kubernetes.io/projected/3a3b4ace-6e1f-47cb-86a9-e5488138abc1-kube-api-access-qpqjz\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.447853 4931 generic.go:334] "Generic (PLEG): container finished" podID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerID="33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097" exitCode=0 Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.447907 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f58fk" event={"ID":"3a3b4ace-6e1f-47cb-86a9-e5488138abc1","Type":"ContainerDied","Data":"33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097"} Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.447938 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f58fk" event={"ID":"3a3b4ace-6e1f-47cb-86a9-e5488138abc1","Type":"ContainerDied","Data":"95b1f6f5d405e856dde3770b24820eb431009b98160bff8df51161ca28427373"} Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.447955 4931 scope.go:117] "RemoveContainer" containerID="33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.448046 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f58fk" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.481209 4931 scope.go:117] "RemoveContainer" containerID="c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.489399 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f58fk"] Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.500038 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f58fk"] Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.504297 4931 scope.go:117] "RemoveContainer" containerID="82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.535998 4931 scope.go:117] "RemoveContainer" containerID="33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097" Jan 30 06:42:28 crc kubenswrapper[4931]: E0130 06:42:28.536361 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097\": container with ID starting with 33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097 not found: ID does not exist" containerID="33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.536392 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097"} err="failed to get container status \"33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097\": rpc error: code = NotFound desc = could not find container \"33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097\": container with ID starting with 33b05cbdc1617d484d891da5422fefa3e3f3db732ebd7f3db528bd94dad2d097 not found: ID does not exist" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.536412 4931 scope.go:117] "RemoveContainer" containerID="c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3" Jan 30 06:42:28 crc kubenswrapper[4931]: E0130 06:42:28.536598 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3\": container with ID starting with c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3 not found: ID does not exist" containerID="c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.536623 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3"} err="failed to get container status \"c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3\": rpc error: code = NotFound desc = could not find container \"c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3\": container with ID starting with c8aa8b279e240894abe786d23c00883f82a3c5a04ca06c870773fa7f9e0837a3 not found: ID does not exist" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.536635 4931 scope.go:117] "RemoveContainer" containerID="82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560" Jan 30 06:42:28 crc kubenswrapper[4931]: E0130 06:42:28.537040 4931 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560\": container with ID starting with 82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560 not found: ID does not exist" containerID="82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560" Jan 30 06:42:28 crc kubenswrapper[4931]: I0130 06:42:28.537062 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560"} err="failed to get container status \"82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560\": rpc error: code = NotFound desc = could not find container \"82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560\": container with ID starting with 82ffd628dbc1781de1a42d2d3dfd1ecf27324557af794f3be3a9579fbf4a1560 not found: ID does not exist" Jan 30 06:42:29 crc kubenswrapper[4931]: I0130 06:42:29.435953 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" path="/var/lib/kubelet/pods/3a3b4ace-6e1f-47cb-86a9-e5488138abc1/volumes" Jan 30 06:42:29 crc kubenswrapper[4931]: I0130 06:42:29.436914 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" path="/var/lib/kubelet/pods/b9cea615-7c24-42ec-b0b3-ba654afb5e48/volumes" Jan 30 06:42:31 crc kubenswrapper[4931]: I0130 06:42:31.068896 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4gjjc"] Jan 30 06:42:31 crc kubenswrapper[4931]: I0130 06:42:31.081148 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9c4e-account-create-update-vz7cn"] Jan 30 06:42:31 crc kubenswrapper[4931]: I0130 06:42:31.093100 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4gjjc"] Jan 30 06:42:31 crc kubenswrapper[4931]: I0130 06:42:31.103756 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9c4e-account-create-update-vz7cn"] Jan 30 06:42:31 crc kubenswrapper[4931]: I0130 06:42:31.443359 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b58826-6a83-4c91-a9f7-8c6c861c509b" path="/var/lib/kubelet/pods/d5b58826-6a83-4c91-a9f7-8c6c861c509b/volumes" Jan 30 06:42:31 crc kubenswrapper[4931]: I0130 06:42:31.444717 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95012d1-b402-48eb-baf4-36fabfd1e4f2" path="/var/lib/kubelet/pods/d95012d1-b402-48eb-baf4-36fabfd1e4f2/volumes" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.025253 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cgvfd"] Jan 30 06:42:33 crc kubenswrapper[4931]: E0130 06:42:33.025845 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="extract-content" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.025869 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="extract-content" Jan 30 06:42:33 crc kubenswrapper[4931]: E0130 06:42:33.025904 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="extract-content" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.025916 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" 
containerName="extract-content" Jan 30 06:42:33 crc kubenswrapper[4931]: E0130 06:42:33.025936 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="extract-utilities" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.025947 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="extract-utilities" Jan 30 06:42:33 crc kubenswrapper[4931]: E0130 06:42:33.025972 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="extract-utilities" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.025982 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="extract-utilities" Jan 30 06:42:33 crc kubenswrapper[4931]: E0130 06:42:33.026008 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="registry-server" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.026022 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="registry-server" Jan 30 06:42:33 crc kubenswrapper[4931]: E0130 06:42:33.026039 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="registry-server" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.026050 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="registry-server" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.026320 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3b4ace-6e1f-47cb-86a9-e5488138abc1" containerName="registry-server" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.026338 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9cea615-7c24-42ec-b0b3-ba654afb5e48" containerName="registry-server" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.027348 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.029122 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-run-ovn\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.029166 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-log-ovn\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.029214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-scripts\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.029404 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-run\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.029577 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66sx7\" (UniqueName: \"kubernetes.io/projected/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-kube-api-access-66sx7\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.037000 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.037030 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jzqpv" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.050564 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cgvfd"] Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.063862 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5tgfr"] Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.065681 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.081198 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5tgfr"] Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.131035 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66sx7\" (UniqueName: \"kubernetes.io/projected/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-kube-api-access-66sx7\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.131127 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-run-ovn\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.131157 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-log-ovn\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.131195 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-scripts\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.131242 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-run\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.131596 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-run\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.132622 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-run-ovn\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.132674 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-var-log-ovn\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.134386 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-scripts\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.177328 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66sx7\" (UniqueName: \"kubernetes.io/projected/9bf15e4b-1a09-401b-87e9-97cff0ee8c91-kube-api-access-66sx7\") pod \"ovn-controller-cgvfd\" (UID: \"9bf15e4b-1a09-401b-87e9-97cff0ee8c91\") " pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.232459 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-log\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.232792 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-etc-ovs\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.232894 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-lib\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.232984 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjmd\" (UniqueName: \"kubernetes.io/projected/56a7c911-151f-42ff-b005-58bdaecd5d8b-kube-api-access-2sjmd\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.233067 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-run\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.233190 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56a7c911-151f-42ff-b005-58bdaecd5d8b-scripts\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.334837 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-log\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.335704 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-etc-ovs\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.335887 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" 
(UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-lib\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.336040 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjmd\" (UniqueName: \"kubernetes.io/projected/56a7c911-151f-42ff-b005-58bdaecd5d8b-kube-api-access-2sjmd\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.336393 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-run\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.336599 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56a7c911-151f-42ff-b005-58bdaecd5d8b-scripts\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.335851 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-etc-ovs\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.335311 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-log\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.336545 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-run\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.335997 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/56a7c911-151f-42ff-b005-58bdaecd5d8b-var-lib\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.338449 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56a7c911-151f-42ff-b005-58bdaecd5d8b-scripts\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.355494 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.356064 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjmd\" (UniqueName: \"kubernetes.io/projected/56a7c911-151f-42ff-b005-58bdaecd5d8b-kube-api-access-2sjmd\") pod \"ovn-controller-ovs-5tgfr\" (UID: \"56a7c911-151f-42ff-b005-58bdaecd5d8b\") " pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.380765 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:33 crc kubenswrapper[4931]: I0130 06:42:33.841513 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cgvfd"] Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.295118 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5tgfr"] Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.427541 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-cs66w"] Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.430202 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.434719 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.466263 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cs66w"] Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.475079 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-ovs-rundir\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.475184 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-config\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.476391 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjwh\" (UniqueName: \"kubernetes.io/projected/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-kube-api-access-gtjwh\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.476439 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-ovn-rundir\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.528289 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cgvfd" event={"ID":"9bf15e4b-1a09-401b-87e9-97cff0ee8c91","Type":"ContainerStarted","Data":"0c3c3eeca6c73b3ff16722355fd66f5c651bc1882cab39286a44a1e8ad76c897"} Jan 
30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.528329 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cgvfd" event={"ID":"9bf15e4b-1a09-401b-87e9-97cff0ee8c91","Type":"ContainerStarted","Data":"5d07d9338e1c0281f7605804ef343063b26310b357099627addd260ceaf76b53"} Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.530410 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cgvfd" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.532211 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5tgfr" event={"ID":"56a7c911-151f-42ff-b005-58bdaecd5d8b","Type":"ContainerStarted","Data":"396cc406a2c1dc92073d551789c027ec668df6ffb85ebe2a4b91603199d49adc"} Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.556550 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cgvfd" podStartSLOduration=2.556529366 podStartE2EDuration="2.556529366s" podCreationTimestamp="2026-01-30 06:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:42:34.548011268 +0000 UTC m=+5689.917921535" watchObservedRunningTime="2026-01-30 06:42:34.556529366 +0000 UTC m=+5689.926439623" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.577639 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-config\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.578666 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtjwh\" (UniqueName: \"kubernetes.io/projected/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-kube-api-access-gtjwh\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.578743 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-ovn-rundir\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.578896 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-ovs-rundir\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.579184 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-ovs-rundir\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.580143 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-config\") pod \"ovn-controller-metrics-cs66w\" (UID: 
\"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.580397 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-ovn-rundir\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.603112 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtjwh\" (UniqueName: \"kubernetes.io/projected/608bb576-83fd-4c7c-b8b3-a4f9ff46b661-kube-api-access-gtjwh\") pod \"ovn-controller-metrics-cs66w\" (UID: \"608bb576-83fd-4c7c-b8b3-a4f9ff46b661\") " pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.720231 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-5v2g5"] Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.724354 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.737532 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5v2g5"] Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.770466 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cs66w" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.781321 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-operator-scripts\") pod \"octavia-db-create-5v2g5\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.781377 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgbxs\" (UniqueName: \"kubernetes.io/projected/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-kube-api-access-rgbxs\") pod \"octavia-db-create-5v2g5\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.888518 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-operator-scripts\") pod \"octavia-db-create-5v2g5\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.888866 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgbxs\" (UniqueName: \"kubernetes.io/projected/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-kube-api-access-rgbxs\") pod \"octavia-db-create-5v2g5\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 06:42:34.890137 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-operator-scripts\") pod \"octavia-db-create-5v2g5\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:34 crc kubenswrapper[4931]: I0130 
06:42:34.914637 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgbxs\" (UniqueName: \"kubernetes.io/projected/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-kube-api-access-rgbxs\") pod \"octavia-db-create-5v2g5\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.043823 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.108504 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cs66w"] Jan 30 06:42:35 crc kubenswrapper[4931]: W0130 06:42:35.116075 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608bb576_83fd_4c7c_b8b3_a4f9ff46b661.slice/crio-870d92e75412e44964d9ea5d91d90956408a5d712425501bbc09693af95cd863 WatchSource:0}: Error finding container 870d92e75412e44964d9ea5d91d90956408a5d712425501bbc09693af95cd863: Status 404 returned error can't find the container with id 870d92e75412e44964d9ea5d91d90956408a5d712425501bbc09693af95cd863 Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.531544 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-5v2g5"] Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.552125 4931 generic.go:334] "Generic (PLEG): container finished" podID="56a7c911-151f-42ff-b005-58bdaecd5d8b" containerID="ed613fb39544acb199b57f50f713767fc77e2920f1d258851689a1db05c78b4b" exitCode=0 Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.552483 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5tgfr" event={"ID":"56a7c911-151f-42ff-b005-58bdaecd5d8b","Type":"ContainerDied","Data":"ed613fb39544acb199b57f50f713767fc77e2920f1d258851689a1db05c78b4b"} Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.556733 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cs66w" event={"ID":"608bb576-83fd-4c7c-b8b3-a4f9ff46b661","Type":"ContainerStarted","Data":"db61070dbe381cafba8aa9a2ec0fa041b77e8020fdbc00408666a5001a641f61"} Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.556760 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cs66w" event={"ID":"608bb576-83fd-4c7c-b8b3-a4f9ff46b661","Type":"ContainerStarted","Data":"870d92e75412e44964d9ea5d91d90956408a5d712425501bbc09693af95cd863"} Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.564101 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5v2g5" event={"ID":"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031","Type":"ContainerStarted","Data":"4f31ce8325d581327e30dec1b76d7448dbaa5c149d47de8e58a7992f4f878229"} Jan 30 06:42:35 crc kubenswrapper[4931]: I0130 06:42:35.600033 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-cs66w" podStartSLOduration=1.600015049 podStartE2EDuration="1.600015049s" podCreationTimestamp="2026-01-30 06:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:42:35.585384379 +0000 UTC m=+5690.955294636" watchObservedRunningTime="2026-01-30 06:42:35.600015049 +0000 UTC m=+5690.969925306" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.361242 4931 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/octavia-db99-account-create-update-8dpb5"] Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.363143 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.366631 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.375933 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db99-account-create-update-8dpb5"] Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.431544 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-685pf\" (UniqueName: \"kubernetes.io/projected/21268b76-c5b2-457f-a433-ff2da3b9bd10-kube-api-access-685pf\") pod \"octavia-db99-account-create-update-8dpb5\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.431920 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21268b76-c5b2-457f-a433-ff2da3b9bd10-operator-scripts\") pod \"octavia-db99-account-create-update-8dpb5\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.534005 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-685pf\" (UniqueName: \"kubernetes.io/projected/21268b76-c5b2-457f-a433-ff2da3b9bd10-kube-api-access-685pf\") pod \"octavia-db99-account-create-update-8dpb5\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.534124 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21268b76-c5b2-457f-a433-ff2da3b9bd10-operator-scripts\") pod \"octavia-db99-account-create-update-8dpb5\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.535202 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21268b76-c5b2-457f-a433-ff2da3b9bd10-operator-scripts\") pod \"octavia-db99-account-create-update-8dpb5\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.555372 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-685pf\" (UniqueName: \"kubernetes.io/projected/21268b76-c5b2-457f-a433-ff2da3b9bd10-kube-api-access-685pf\") pod \"octavia-db99-account-create-update-8dpb5\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.572874 4931 generic.go:334] "Generic (PLEG): container finished" podID="cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" containerID="fc2609bf05101f454b500a411b2af7bec596ed7b2a503264f64b371462ed10d1" exitCode=0 Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.572964 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-db-create-5v2g5" event={"ID":"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031","Type":"ContainerDied","Data":"fc2609bf05101f454b500a411b2af7bec596ed7b2a503264f64b371462ed10d1"} Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.575375 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5tgfr" event={"ID":"56a7c911-151f-42ff-b005-58bdaecd5d8b","Type":"ContainerStarted","Data":"89c255ef545094cd8326f3a470ab7d7a2d1bab357be8a008fe08a91ef20fb55e"} Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.575449 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5tgfr" event={"ID":"56a7c911-151f-42ff-b005-58bdaecd5d8b","Type":"ContainerStarted","Data":"929ba428a303a75db914f2ca8c52c2f16380b2e1370ef0fef0e2e21040d610dd"} Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.575650 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.627590 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5tgfr" podStartSLOduration=4.627566656 podStartE2EDuration="4.627566656s" podCreationTimestamp="2026-01-30 06:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:42:36.618693588 +0000 UTC m=+5691.988603865" watchObservedRunningTime="2026-01-30 06:42:36.627566656 +0000 UTC m=+5691.997476913" Jan 30 06:42:36 crc kubenswrapper[4931]: I0130 06:42:36.681324 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.049601 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ckz5s"] Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.058764 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ckz5s"] Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.178377 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db99-account-create-update-8dpb5"] Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.435968 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65588685-2245-486d-b7a9-95b8a71f8ff7" path="/var/lib/kubelet/pods/65588685-2245-486d-b7a9-95b8a71f8ff7/volumes" Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.590760 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db99-account-create-update-8dpb5" event={"ID":"21268b76-c5b2-457f-a433-ff2da3b9bd10","Type":"ContainerStarted","Data":"e4d54f2baf7c89b6f0656a4f82e037b9be01132659b5c11c20729732cbeb53c0"} Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.590837 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db99-account-create-update-8dpb5" event={"ID":"21268b76-c5b2-457f-a433-ff2da3b9bd10","Type":"ContainerStarted","Data":"5bd83a89b959b37a4f5ed3c77a498449419d8ecd7c9dcba1f993459e25b6e2c3"} Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.591272 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.612399 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db99-account-create-update-8dpb5" podStartSLOduration=1.612354586 
podStartE2EDuration="1.612354586s" podCreationTimestamp="2026-01-30 06:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:42:37.60854118 +0000 UTC m=+5692.978451487" watchObservedRunningTime="2026-01-30 06:42:37.612354586 +0000 UTC m=+5692.982264883" Jan 30 06:42:37 crc kubenswrapper[4931]: I0130 06:42:37.988210 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.069611 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgbxs\" (UniqueName: \"kubernetes.io/projected/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-kube-api-access-rgbxs\") pod \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.069720 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-operator-scripts\") pod \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\" (UID: \"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031\") " Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.070228 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" (UID: "cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.070604 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.075793 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-kube-api-access-rgbxs" (OuterVolumeSpecName: "kube-api-access-rgbxs") pod "cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" (UID: "cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031"). InnerVolumeSpecName "kube-api-access-rgbxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.172631 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgbxs\" (UniqueName: \"kubernetes.io/projected/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031-kube-api-access-rgbxs\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.606183 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-5v2g5" event={"ID":"cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031","Type":"ContainerDied","Data":"4f31ce8325d581327e30dec1b76d7448dbaa5c149d47de8e58a7992f4f878229"} Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.606244 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f31ce8325d581327e30dec1b76d7448dbaa5c149d47de8e58a7992f4f878229" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.606338 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-5v2g5" Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.609808 4931 generic.go:334] "Generic (PLEG): container finished" podID="21268b76-c5b2-457f-a433-ff2da3b9bd10" containerID="e4d54f2baf7c89b6f0656a4f82e037b9be01132659b5c11c20729732cbeb53c0" exitCode=0 Jan 30 06:42:38 crc kubenswrapper[4931]: I0130 06:42:38.609880 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db99-account-create-update-8dpb5" event={"ID":"21268b76-c5b2-457f-a433-ff2da3b9bd10","Type":"ContainerDied","Data":"e4d54f2baf7c89b6f0656a4f82e037b9be01132659b5c11c20729732cbeb53c0"} Jan 30 06:42:39 crc kubenswrapper[4931]: I0130 06:42:39.422330 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:42:39 crc kubenswrapper[4931]: E0130 06:42:39.423100 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.129898 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.214465 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-685pf\" (UniqueName: \"kubernetes.io/projected/21268b76-c5b2-457f-a433-ff2da3b9bd10-kube-api-access-685pf\") pod \"21268b76-c5b2-457f-a433-ff2da3b9bd10\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.214773 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21268b76-c5b2-457f-a433-ff2da3b9bd10-operator-scripts\") pod \"21268b76-c5b2-457f-a433-ff2da3b9bd10\" (UID: \"21268b76-c5b2-457f-a433-ff2da3b9bd10\") " Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.215590 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21268b76-c5b2-457f-a433-ff2da3b9bd10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21268b76-c5b2-457f-a433-ff2da3b9bd10" (UID: "21268b76-c5b2-457f-a433-ff2da3b9bd10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.223948 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21268b76-c5b2-457f-a433-ff2da3b9bd10-kube-api-access-685pf" (OuterVolumeSpecName: "kube-api-access-685pf") pod "21268b76-c5b2-457f-a433-ff2da3b9bd10" (UID: "21268b76-c5b2-457f-a433-ff2da3b9bd10"). InnerVolumeSpecName "kube-api-access-685pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.326369 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21268b76-c5b2-457f-a433-ff2da3b9bd10-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.326890 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-685pf\" (UniqueName: \"kubernetes.io/projected/21268b76-c5b2-457f-a433-ff2da3b9bd10-kube-api-access-685pf\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.641115 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db99-account-create-update-8dpb5" event={"ID":"21268b76-c5b2-457f-a433-ff2da3b9bd10","Type":"ContainerDied","Data":"5bd83a89b959b37a4f5ed3c77a498449419d8ecd7c9dcba1f993459e25b6e2c3"} Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.641174 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd83a89b959b37a4f5ed3c77a498449419d8ecd7c9dcba1f993459e25b6e2c3" Jan 30 06:42:40 crc kubenswrapper[4931]: I0130 06:42:40.641265 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db99-account-create-update-8dpb5" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.351415 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-scftb"] Jan 30 06:42:42 crc kubenswrapper[4931]: E0130 06:42:42.352169 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21268b76-c5b2-457f-a433-ff2da3b9bd10" containerName="mariadb-account-create-update" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.352187 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="21268b76-c5b2-457f-a433-ff2da3b9bd10" containerName="mariadb-account-create-update" Jan 30 06:42:42 crc kubenswrapper[4931]: E0130 06:42:42.352204 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" containerName="mariadb-database-create" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.352212 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" containerName="mariadb-database-create" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.352462 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" containerName="mariadb-database-create" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.352498 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="21268b76-c5b2-457f-a433-ff2da3b9bd10" containerName="mariadb-account-create-update" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.353199 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.377872 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-scftb"] Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.466224 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd2kj\" (UniqueName: \"kubernetes.io/projected/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-kube-api-access-gd2kj\") pod \"octavia-persistence-db-create-scftb\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.466306 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-operator-scripts\") pod \"octavia-persistence-db-create-scftb\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.568543 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-operator-scripts\") pod \"octavia-persistence-db-create-scftb\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.569687 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-operator-scripts\") pod \"octavia-persistence-db-create-scftb\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.570982 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd2kj\" (UniqueName: \"kubernetes.io/projected/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-kube-api-access-gd2kj\") pod \"octavia-persistence-db-create-scftb\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.591023 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd2kj\" (UniqueName: \"kubernetes.io/projected/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-kube-api-access-gd2kj\") pod \"octavia-persistence-db-create-scftb\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:42 crc kubenswrapper[4931]: I0130 06:42:42.690892 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.188534 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-scftb"] Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.543273 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-06bb-account-create-update-9w2dc"] Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.544741 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.557037 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.559621 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-06bb-account-create-update-9w2dc"] Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.593297 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2k85\" (UniqueName: \"kubernetes.io/projected/cffbc623-924e-4952-890f-da78398d60fb-kube-api-access-g2k85\") pod \"octavia-06bb-account-create-update-9w2dc\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.593435 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cffbc623-924e-4952-890f-da78398d60fb-operator-scripts\") pod \"octavia-06bb-account-create-update-9w2dc\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.674607 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-scftb" event={"ID":"7c32b952-ea20-4e38-be3d-0ca833fb8aaf","Type":"ContainerStarted","Data":"e1e6f3089c65c6555f71402d2087406c6daee75232130f6c5b8762d180358f01"} Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.674657 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-scftb" event={"ID":"7c32b952-ea20-4e38-be3d-0ca833fb8aaf","Type":"ContainerStarted","Data":"7d096987e285866c77fbd01b68c7f6ac747b1c7e0268d9f81a6b6350b6a68d69"} Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.695672 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2k85\" (UniqueName: \"kubernetes.io/projected/cffbc623-924e-4952-890f-da78398d60fb-kube-api-access-g2k85\") pod \"octavia-06bb-account-create-update-9w2dc\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.695739 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cffbc623-924e-4952-890f-da78398d60fb-operator-scripts\") pod \"octavia-06bb-account-create-update-9w2dc\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.697325 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-persistence-db-create-scftb" podStartSLOduration=1.697306749 podStartE2EDuration="1.697306749s" podCreationTimestamp="2026-01-30 06:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:42:43.696733963 +0000 UTC m=+5699.066644220" watchObservedRunningTime="2026-01-30 06:42:43.697306749 +0000 UTC m=+5699.067217006" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.697770 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cffbc623-924e-4952-890f-da78398d60fb-operator-scripts\") pod \"octavia-06bb-account-create-update-9w2dc\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.715861 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2k85\" (UniqueName: \"kubernetes.io/projected/cffbc623-924e-4952-890f-da78398d60fb-kube-api-access-g2k85\") pod \"octavia-06bb-account-create-update-9w2dc\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:43 crc kubenswrapper[4931]: I0130 06:42:43.866604 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:44 crc kubenswrapper[4931]: I0130 06:42:44.376698 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-06bb-account-create-update-9w2dc"] Jan 30 06:42:44 crc kubenswrapper[4931]: I0130 06:42:44.687780 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-06bb-account-create-update-9w2dc" event={"ID":"cffbc623-924e-4952-890f-da78398d60fb","Type":"ContainerStarted","Data":"530188cbe277ff66c92e88116244fb7e483ee43f7966980292a116725f942bcd"} Jan 30 06:42:44 crc kubenswrapper[4931]: I0130 06:42:44.688050 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-06bb-account-create-update-9w2dc" event={"ID":"cffbc623-924e-4952-890f-da78398d60fb","Type":"ContainerStarted","Data":"c14c6caddb40b4927eb0e087bda428a9231bf8741a027a2476743c80e0b8dcda"} Jan 30 06:42:44 crc kubenswrapper[4931]: I0130 06:42:44.690263 4931 generic.go:334] "Generic (PLEG): container finished" podID="7c32b952-ea20-4e38-be3d-0ca833fb8aaf" containerID="e1e6f3089c65c6555f71402d2087406c6daee75232130f6c5b8762d180358f01" exitCode=0 Jan 30 06:42:44 crc kubenswrapper[4931]: I0130 06:42:44.690286 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-scftb" event={"ID":"7c32b952-ea20-4e38-be3d-0ca833fb8aaf","Type":"ContainerDied","Data":"e1e6f3089c65c6555f71402d2087406c6daee75232130f6c5b8762d180358f01"} Jan 30 06:42:44 crc kubenswrapper[4931]: I0130 06:42:44.710570 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-06bb-account-create-update-9w2dc" podStartSLOduration=1.7105528460000001 podStartE2EDuration="1.710552846s" podCreationTimestamp="2026-01-30 06:42:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:42:44.704932709 +0000 UTC m=+5700.074842966" watchObservedRunningTime="2026-01-30 06:42:44.710552846 +0000 UTC m=+5700.080463103" Jan 30 06:42:45 crc kubenswrapper[4931]: I0130 06:42:45.701837 4931 generic.go:334] "Generic (PLEG): container finished" podID="cffbc623-924e-4952-890f-da78398d60fb" containerID="530188cbe277ff66c92e88116244fb7e483ee43f7966980292a116725f942bcd" exitCode=0 Jan 30 06:42:45 crc kubenswrapper[4931]: I0130 06:42:45.702302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-06bb-account-create-update-9w2dc" event={"ID":"cffbc623-924e-4952-890f-da78398d60fb","Type":"ContainerDied","Data":"530188cbe277ff66c92e88116244fb7e483ee43f7966980292a116725f942bcd"} Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.243108 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.342942 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-operator-scripts\") pod \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.343025 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd2kj\" (UniqueName: \"kubernetes.io/projected/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-kube-api-access-gd2kj\") pod \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\" (UID: \"7c32b952-ea20-4e38-be3d-0ca833fb8aaf\") " Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.343456 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c32b952-ea20-4e38-be3d-0ca833fb8aaf" (UID: "7c32b952-ea20-4e38-be3d-0ca833fb8aaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.348233 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-kube-api-access-gd2kj" (OuterVolumeSpecName: "kube-api-access-gd2kj") pod "7c32b952-ea20-4e38-be3d-0ca833fb8aaf" (UID: "7c32b952-ea20-4e38-be3d-0ca833fb8aaf"). InnerVolumeSpecName "kube-api-access-gd2kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.445873 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.445925 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd2kj\" (UniqueName: \"kubernetes.io/projected/7c32b952-ea20-4e38-be3d-0ca833fb8aaf-kube-api-access-gd2kj\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.714386 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-scftb" Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.714391 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-scftb" event={"ID":"7c32b952-ea20-4e38-be3d-0ca833fb8aaf","Type":"ContainerDied","Data":"7d096987e285866c77fbd01b68c7f6ac747b1c7e0268d9f81a6b6350b6a68d69"} Jan 30 06:42:46 crc kubenswrapper[4931]: I0130 06:42:46.714477 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d096987e285866c77fbd01b68c7f6ac747b1c7e0268d9f81a6b6350b6a68d69" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.046485 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.168598 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cffbc623-924e-4952-890f-da78398d60fb-operator-scripts\") pod \"cffbc623-924e-4952-890f-da78398d60fb\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.168888 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2k85\" (UniqueName: \"kubernetes.io/projected/cffbc623-924e-4952-890f-da78398d60fb-kube-api-access-g2k85\") pod \"cffbc623-924e-4952-890f-da78398d60fb\" (UID: \"cffbc623-924e-4952-890f-da78398d60fb\") " Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.169145 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cffbc623-924e-4952-890f-da78398d60fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cffbc623-924e-4952-890f-da78398d60fb" (UID: "cffbc623-924e-4952-890f-da78398d60fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.169562 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cffbc623-924e-4952-890f-da78398d60fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.175399 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffbc623-924e-4952-890f-da78398d60fb-kube-api-access-g2k85" (OuterVolumeSpecName: "kube-api-access-g2k85") pod "cffbc623-924e-4952-890f-da78398d60fb" (UID: "cffbc623-924e-4952-890f-da78398d60fb"). InnerVolumeSpecName "kube-api-access-g2k85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.271206 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2k85\" (UniqueName: \"kubernetes.io/projected/cffbc623-924e-4952-890f-da78398d60fb-kube-api-access-g2k85\") on node \"crc\" DevicePath \"\"" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.746083 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-06bb-account-create-update-9w2dc" event={"ID":"cffbc623-924e-4952-890f-da78398d60fb","Type":"ContainerDied","Data":"c14c6caddb40b4927eb0e087bda428a9231bf8741a027a2476743c80e0b8dcda"} Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.746123 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c14c6caddb40b4927eb0e087bda428a9231bf8741a027a2476743c80e0b8dcda" Jan 30 06:42:47 crc kubenswrapper[4931]: I0130 06:42:47.746175 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-06bb-account-create-update-9w2dc" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.237832 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-79684d7c94-4r69m"] Jan 30 06:42:49 crc kubenswrapper[4931]: E0130 06:42:49.238467 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffbc623-924e-4952-890f-da78398d60fb" containerName="mariadb-account-create-update" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.238478 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffbc623-924e-4952-890f-da78398d60fb" containerName="mariadb-account-create-update" Jan 30 06:42:49 crc kubenswrapper[4931]: E0130 06:42:49.238494 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c32b952-ea20-4e38-be3d-0ca833fb8aaf" containerName="mariadb-database-create" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.238500 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c32b952-ea20-4e38-be3d-0ca833fb8aaf" containerName="mariadb-database-create" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.238695 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffbc623-924e-4952-890f-da78398d60fb" containerName="mariadb-account-create-update" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.238711 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c32b952-ea20-4e38-be3d-0ca833fb8aaf" containerName="mariadb-database-create" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.239968 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.241757 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-8cr2b" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.242676 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.243184 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.259885 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-79684d7c94-4r69m"] Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.414332 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-combined-ca-bundle\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.414519 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-octavia-run\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.414786 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-config-data-merged\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " 
pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.414885 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-config-data\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.414994 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-scripts\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.516260 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-scripts\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.516347 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-combined-ca-bundle\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.516405 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-octavia-run\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.516540 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-config-data-merged\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.516595 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-config-data\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.517266 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-config-data-merged\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.517607 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-octavia-run\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc 
kubenswrapper[4931]: I0130 06:42:49.523841 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-combined-ca-bundle\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.534571 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-scripts\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.535652 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395-config-data\") pod \"octavia-api-79684d7c94-4r69m\" (UID: \"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395\") " pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:49 crc kubenswrapper[4931]: I0130 06:42:49.561349 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:42:50 crc kubenswrapper[4931]: I0130 06:42:50.079999 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-79684d7c94-4r69m"] Jan 30 06:42:50 crc kubenswrapper[4931]: I0130 06:42:50.422259 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:42:50 crc kubenswrapper[4931]: E0130 06:42:50.422583 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:42:50 crc kubenswrapper[4931]: I0130 06:42:50.775377 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-79684d7c94-4r69m" event={"ID":"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395","Type":"ContainerStarted","Data":"f5467262858899783db67c1a783f82564094e34125fc6828b0ef571352e14f0a"} Jan 30 06:42:51 crc kubenswrapper[4931]: I0130 06:42:51.150808 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dj4rz"] Jan 30 06:42:51 crc kubenswrapper[4931]: I0130 06:42:51.166810 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dj4rz"] Jan 30 06:42:51 crc kubenswrapper[4931]: I0130 06:42:51.439036 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e6e702-ef29-49bd-836a-f46b2abd51cc" path="/var/lib/kubelet/pods/26e6e702-ef29-49bd-836a-f46b2abd51cc/volumes" Jan 30 06:42:54 crc kubenswrapper[4931]: I0130 06:42:54.979075 4931 scope.go:117] "RemoveContainer" containerID="4c48b12d2b0648e9dfb8706d20edad9baaeb7875a958059b6dc53ace27e47c1e" Jan 30 06:43:00 crc kubenswrapper[4931]: I0130 06:43:00.402207 4931 scope.go:117] "RemoveContainer" containerID="cdd69294edc054ababcd2665bc64a680aa08a0804663ff331148d5c4aedf9140" Jan 30 06:43:00 crc kubenswrapper[4931]: I0130 06:43:00.457551 4931 scope.go:117] "RemoveContainer" containerID="cbeb8eeb50114e9f7323f10c636e65f223a4a4ce3cb8536dd62a68b32d0fcd46" Jan 30 
06:43:00 crc kubenswrapper[4931]: I0130 06:43:00.485831 4931 scope.go:117] "RemoveContainer" containerID="a4061e639e286e3a321d0a950315a3048946e43d437d1b9673f6d152b515bf12" Jan 30 06:43:00 crc kubenswrapper[4931]: I0130 06:43:00.904611 4931 generic.go:334] "Generic (PLEG): container finished" podID="eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395" containerID="af4f8c24491864d362f08c260d7724ee10d7788a36dc391894542a043ed83880" exitCode=0 Jan 30 06:43:00 crc kubenswrapper[4931]: I0130 06:43:00.904665 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-79684d7c94-4r69m" event={"ID":"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395","Type":"ContainerDied","Data":"af4f8c24491864d362f08c260d7724ee10d7788a36dc391894542a043ed83880"} Jan 30 06:43:01 crc kubenswrapper[4931]: I0130 06:43:01.924873 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-79684d7c94-4r69m" event={"ID":"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395","Type":"ContainerStarted","Data":"472b256dc43e23ba13a9c133b6ae84fb385d090c66373de1f283ad92d133dbfd"} Jan 30 06:43:01 crc kubenswrapper[4931]: I0130 06:43:01.925346 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:43:01 crc kubenswrapper[4931]: I0130 06:43:01.925360 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-79684d7c94-4r69m" event={"ID":"eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395","Type":"ContainerStarted","Data":"c169ceba7e478d663c2966aefcc21ea691f8d7a9b6b4e66c4a90e69171df77cb"} Jan 30 06:43:01 crc kubenswrapper[4931]: I0130 06:43:01.925378 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:43:01 crc kubenswrapper[4931]: I0130 06:43:01.962070 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-79684d7c94-4r69m" podStartSLOduration=2.564466702 podStartE2EDuration="12.962049505s" podCreationTimestamp="2026-01-30 06:42:49 +0000 UTC" firstStartedPulling="2026-01-30 06:42:50.090034722 +0000 UTC m=+5705.459944979" lastFinishedPulling="2026-01-30 06:43:00.487617525 +0000 UTC m=+5715.857527782" observedRunningTime="2026-01-30 06:43:01.957467607 +0000 UTC m=+5717.327377894" watchObservedRunningTime="2026-01-30 06:43:01.962049505 +0000 UTC m=+5717.331959782" Jan 30 06:43:04 crc kubenswrapper[4931]: I0130 06:43:04.422013 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:43:04 crc kubenswrapper[4931]: E0130 06:43:04.423033 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.784382 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-g99g6"] Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.786024 4931 util.go:30] "No sandbox for pod can be found. 
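Need to start a new one" pod="openstack/octavia-rsyslog-g99g6"

Note: the octavia-api "Observed pod startup duration" line above shows how podStartSLOduration is derived: it is podStartE2EDuration minus the image-pull window. Here E2E = 06:43:01.962049505 − 06:42:49 = 12.962049505s, the pull window is 06:43:00.487617525 − 06:42:50.090034722 = 10.397582803s, and 12.962049505 − 10.397582803 = 2.564466702s, exactly the logged SLO value; for the jobs that pulled nothing (zero-valued pull timestamps) the two durations are equal. A sketch reproducing the arithmetic with timestamps copied from the log:

```go
// slo_sketch.go: podStartSLOduration = podStartE2EDuration - image-pull window.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the log's timestamp format; Go accepts fractional
	// seconds in the input even though the layout omits them.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 06:42:49 +0000 UTC")
	firstPull := mustParse("2026-01-30 06:42:50.090034722 +0000 UTC")
	lastPull := mustParse("2026-01-30 06:43:00.487617525 +0000 UTC")
	observed := mustParse("2026-01-30 06:43:01.962049505 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
	// Prints 12.962049505s and 2.564466702s, matching the log line.
}
```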
Need to start a new one" pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.792689 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.792970 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.793052 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.800808 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-g99g6"] Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.930783 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c16935c-c83b-4b45-b4cd-b61f20ee764f-hm-ports\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.931244 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c16935c-c83b-4b45-b4cd-b61f20ee764f-scripts\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.931369 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c16935c-c83b-4b45-b4cd-b61f20ee764f-config-data-merged\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:06 crc kubenswrapper[4931]: I0130 06:43:06.931392 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c16935c-c83b-4b45-b4cd-b61f20ee764f-config-data\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.032899 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c16935c-c83b-4b45-b4cd-b61f20ee764f-hm-ports\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.032991 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c16935c-c83b-4b45-b4cd-b61f20ee764f-scripts\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.033050 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c16935c-c83b-4b45-b4cd-b61f20ee764f-config-data-merged\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.033067 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2c16935c-c83b-4b45-b4cd-b61f20ee764f-config-data\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.033548 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c16935c-c83b-4b45-b4cd-b61f20ee764f-config-data-merged\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.033710 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c16935c-c83b-4b45-b4cd-b61f20ee764f-hm-ports\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.039146 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c16935c-c83b-4b45-b4cd-b61f20ee764f-scripts\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.047908 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c16935c-c83b-4b45-b4cd-b61f20ee764f-config-data\") pod \"octavia-rsyslog-g99g6\" (UID: \"2c16935c-c83b-4b45-b4cd-b61f20ee764f\") " pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.104594 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.492078 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-6zvwf"] Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.494791 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.497273 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.514022 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-6zvwf"] Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.646274 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3758e44c-007e-45da-87b8-ed2fdeb3e23c-httpd-config\") pod \"octavia-image-upload-65dd99cb46-6zvwf\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.646344 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3758e44c-007e-45da-87b8-ed2fdeb3e23c-amphora-image\") pod \"octavia-image-upload-65dd99cb46-6zvwf\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.676213 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-g99g6"] Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.748767 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3758e44c-007e-45da-87b8-ed2fdeb3e23c-httpd-config\") pod \"octavia-image-upload-65dd99cb46-6zvwf\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.748846 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3758e44c-007e-45da-87b8-ed2fdeb3e23c-amphora-image\") pod \"octavia-image-upload-65dd99cb46-6zvwf\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.749354 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3758e44c-007e-45da-87b8-ed2fdeb3e23c-amphora-image\") pod \"octavia-image-upload-65dd99cb46-6zvwf\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.754859 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3758e44c-007e-45da-87b8-ed2fdeb3e23c-httpd-config\") pod \"octavia-image-upload-65dd99cb46-6zvwf\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.830251 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:07 crc kubenswrapper[4931]: I0130 06:43:07.904105 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-g99g6"] Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.034781 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-g99g6" event={"ID":"2c16935c-c83b-4b45-b4cd-b61f20ee764f","Type":"ContainerStarted","Data":"03c44fb8c53413e83f41c5315c903df78cd0125b4779c5ce07049143f187fe0e"} Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.335639 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-6zvwf"] Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.416921 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cgvfd" podUID="9bf15e4b-1a09-401b-87e9-97cff0ee8c91" containerName="ovn-controller" probeResult="failure" output=< Jan 30 06:43:08 crc kubenswrapper[4931]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 06:43:08 crc kubenswrapper[4931]: > Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.420253 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.437305 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5tgfr" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.529563 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cgvfd-config-w2szm"] Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.530876 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.532973 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.539602 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cgvfd-config-w2szm"] Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.664553 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-log-ovn\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.664670 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-scripts\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.664707 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.664757 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-additional-scripts\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.664798 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7qvh\" (UniqueName: \"kubernetes.io/projected/2f8082cd-2af8-4181-9d8f-73436fea45bc-kube-api-access-x7qvh\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.664817 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run-ovn\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.766027 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-additional-scripts\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.766156 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7qvh\" (UniqueName: 
\"kubernetes.io/projected/2f8082cd-2af8-4181-9d8f-73436fea45bc-kube-api-access-x7qvh\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.766195 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run-ovn\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.766948 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run-ovn\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.767127 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-log-ovn\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.767221 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-log-ovn\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.767238 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-additional-scripts\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.767371 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-scripts\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.767699 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.767826 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.774138 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-scripts\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.795157 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7qvh\" (UniqueName: \"kubernetes.io/projected/2f8082cd-2af8-4181-9d8f-73436fea45bc-kube-api-access-x7qvh\") pod \"ovn-controller-cgvfd-config-w2szm\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:08 crc kubenswrapper[4931]: I0130 06:43:08.859319 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:09 crc kubenswrapper[4931]: I0130 06:43:09.058247 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" event={"ID":"3758e44c-007e-45da-87b8-ed2fdeb3e23c","Type":"ContainerStarted","Data":"02c7a36d17a999021624a691df6d0451e8418cc04c05cd38164901646377f2ae"} Jan 30 06:43:09 crc kubenswrapper[4931]: I0130 06:43:09.372190 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cgvfd-config-w2szm"] Jan 30 06:43:09 crc kubenswrapper[4931]: W0130 06:43:09.378863 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f8082cd_2af8_4181_9d8f_73436fea45bc.slice/crio-a2f0e03cf5afdfffd45d581ccaf50b549e6e76e484005e16bc912ac74f8f09c8 WatchSource:0}: Error finding container a2f0e03cf5afdfffd45d581ccaf50b549e6e76e484005e16bc912ac74f8f09c8: Status 404 returned error can't find the container with id a2f0e03cf5afdfffd45d581ccaf50b549e6e76e484005e16bc912ac74f8f09c8 Jan 30 06:43:10 crc kubenswrapper[4931]: I0130 06:43:10.070879 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cgvfd-config-w2szm" event={"ID":"2f8082cd-2af8-4181-9d8f-73436fea45bc","Type":"ContainerStarted","Data":"3f61c0f02fd2c2ced024d1e703805185ca2f7ee2e42863e0e8a06a4f812766d2"} Jan 30 06:43:10 crc kubenswrapper[4931]: I0130 06:43:10.076756 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cgvfd-config-w2szm" event={"ID":"2f8082cd-2af8-4181-9d8f-73436fea45bc","Type":"ContainerStarted","Data":"a2f0e03cf5afdfffd45d581ccaf50b549e6e76e484005e16bc912ac74f8f09c8"} Jan 30 06:43:10 crc kubenswrapper[4931]: I0130 06:43:10.097086 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cgvfd-config-w2szm" podStartSLOduration=2.097069139 podStartE2EDuration="2.097069139s" podCreationTimestamp="2026-01-30 06:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:43:10.089643051 +0000 UTC m=+5725.459553308" watchObservedRunningTime="2026-01-30 06:43:10.097069139 +0000 UTC m=+5725.466979396" Jan 30 06:43:11 crc kubenswrapper[4931]: I0130 06:43:11.082777 4931 generic.go:334] "Generic (PLEG): container finished" podID="2f8082cd-2af8-4181-9d8f-73436fea45bc" containerID="3f61c0f02fd2c2ced024d1e703805185ca2f7ee2e42863e0e8a06a4f812766d2" exitCode=0 Jan 30 06:43:11 crc kubenswrapper[4931]: I0130 06:43:11.082893 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cgvfd-config-w2szm" 
event={"ID":"2f8082cd-2af8-4181-9d8f-73436fea45bc","Type":"ContainerDied","Data":"3f61c0f02fd2c2ced024d1e703805185ca2f7ee2e42863e0e8a06a4f812766d2"} Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.706571 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.870687 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-scripts\") pod \"2f8082cd-2af8-4181-9d8f-73436fea45bc\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.870788 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-additional-scripts\") pod \"2f8082cd-2af8-4181-9d8f-73436fea45bc\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.871832 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-scripts" (OuterVolumeSpecName: "scripts") pod "2f8082cd-2af8-4181-9d8f-73436fea45bc" (UID: "2f8082cd-2af8-4181-9d8f-73436fea45bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.871847 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2f8082cd-2af8-4181-9d8f-73436fea45bc" (UID: "2f8082cd-2af8-4181-9d8f-73436fea45bc"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.871949 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7qvh\" (UniqueName: \"kubernetes.io/projected/2f8082cd-2af8-4181-9d8f-73436fea45bc-kube-api-access-x7qvh\") pod \"2f8082cd-2af8-4181-9d8f-73436fea45bc\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.872853 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run-ovn\") pod \"2f8082cd-2af8-4181-9d8f-73436fea45bc\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.872882 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run\") pod \"2f8082cd-2af8-4181-9d8f-73436fea45bc\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.872903 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-log-ovn\") pod \"2f8082cd-2af8-4181-9d8f-73436fea45bc\" (UID: \"2f8082cd-2af8-4181-9d8f-73436fea45bc\") " Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873231 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run" (OuterVolumeSpecName: "var-run") pod "2f8082cd-2af8-4181-9d8f-73436fea45bc" (UID: "2f8082cd-2af8-4181-9d8f-73436fea45bc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873262 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2f8082cd-2af8-4181-9d8f-73436fea45bc" (UID: "2f8082cd-2af8-4181-9d8f-73436fea45bc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873293 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2f8082cd-2af8-4181-9d8f-73436fea45bc" (UID: "2f8082cd-2af8-4181-9d8f-73436fea45bc"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873347 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873367 4931 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f8082cd-2af8-4181-9d8f-73436fea45bc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873383 4931 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.873395 4931 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.886702 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8082cd-2af8-4181-9d8f-73436fea45bc-kube-api-access-x7qvh" (OuterVolumeSpecName: "kube-api-access-x7qvh") pod "2f8082cd-2af8-4181-9d8f-73436fea45bc" (UID: "2f8082cd-2af8-4181-9d8f-73436fea45bc"). InnerVolumeSpecName "kube-api-access-x7qvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.975849 4931 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f8082cd-2af8-4181-9d8f-73436fea45bc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:12 crc kubenswrapper[4931]: I0130 06:43:12.975897 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7qvh\" (UniqueName: \"kubernetes.io/projected/2f8082cd-2af8-4181-9d8f-73436fea45bc-kube-api-access-x7qvh\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.103762 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-g99g6" event={"ID":"2c16935c-c83b-4b45-b4cd-b61f20ee764f","Type":"ContainerStarted","Data":"c3925675d16d4559e27133114b0b918739304e4ebddd62d74c52e33ae5f25e71"} Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.106992 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cgvfd-config-w2szm" event={"ID":"2f8082cd-2af8-4181-9d8f-73436fea45bc","Type":"ContainerDied","Data":"a2f0e03cf5afdfffd45d581ccaf50b549e6e76e484005e16bc912ac74f8f09c8"} Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.107030 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f0e03cf5afdfffd45d581ccaf50b549e6e76e484005e16bc912ac74f8f09c8" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.107078 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cgvfd-config-w2szm" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.171205 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cgvfd-config-w2szm"] Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.187568 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cgvfd-config-w2szm"] Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.314537 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-2clsb"] Jan 30 06:43:13 crc kubenswrapper[4931]: E0130 06:43:13.315002 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8082cd-2af8-4181-9d8f-73436fea45bc" containerName="ovn-config" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.315021 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8082cd-2af8-4181-9d8f-73436fea45bc" containerName="ovn-config" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.315201 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8082cd-2af8-4181-9d8f-73436fea45bc" containerName="ovn-config" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.316221 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.318065 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.323406 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-2clsb"] Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.384550 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data-merged\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.384650 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.385637 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-scripts\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.385663 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-combined-ca-bundle\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.408914 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cgvfd" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.437164 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2f8082cd-2af8-4181-9d8f-73436fea45bc" path="/var/lib/kubelet/pods/2f8082cd-2af8-4181-9d8f-73436fea45bc/volumes" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.487870 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-scripts\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.487915 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-combined-ca-bundle\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.488071 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data-merged\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.488142 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.489766 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data-merged\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.493530 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-scripts\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.493631 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.507988 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-combined-ca-bundle\") pod \"octavia-db-sync-2clsb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:13 crc kubenswrapper[4931]: I0130 06:43:13.642965 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:14 crc kubenswrapper[4931]: I0130 06:43:14.128244 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-2clsb"] Jan 30 06:43:14 crc kubenswrapper[4931]: W0130 06:43:14.135193 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7503f1_c8e7_4b48_9dad_a4a221ebdbbb.slice/crio-ae7cb9704c5cb2662c592a30651057ff3bbb6f8f549e1ae4f1cbae5155be4ce3 WatchSource:0}: Error finding container ae7cb9704c5cb2662c592a30651057ff3bbb6f8f549e1ae4f1cbae5155be4ce3: Status 404 returned error can't find the container with id ae7cb9704c5cb2662c592a30651057ff3bbb6f8f549e1ae4f1cbae5155be4ce3 Jan 30 06:43:15 crc kubenswrapper[4931]: I0130 06:43:15.143189 4931 generic.go:334] "Generic (PLEG): container finished" podID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerID="c4a34d96918d76961993d44bfa88235147b0d786fa8c13e4e74a51d5c91d0e97" exitCode=0 Jan 30 06:43:15 crc kubenswrapper[4931]: I0130 06:43:15.143617 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-2clsb" event={"ID":"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb","Type":"ContainerDied","Data":"c4a34d96918d76961993d44bfa88235147b0d786fa8c13e4e74a51d5c91d0e97"} Jan 30 06:43:15 crc kubenswrapper[4931]: I0130 06:43:15.143645 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-2clsb" event={"ID":"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb","Type":"ContainerStarted","Data":"ae7cb9704c5cb2662c592a30651057ff3bbb6f8f549e1ae4f1cbae5155be4ce3"} Jan 30 06:43:15 crc kubenswrapper[4931]: I0130 06:43:15.153677 4931 generic.go:334] "Generic (PLEG): container finished" podID="2c16935c-c83b-4b45-b4cd-b61f20ee764f" containerID="c3925675d16d4559e27133114b0b918739304e4ebddd62d74c52e33ae5f25e71" exitCode=0 Jan 30 06:43:15 crc kubenswrapper[4931]: I0130 06:43:15.153731 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-g99g6" event={"ID":"2c16935c-c83b-4b45-b4cd-b61f20ee764f","Type":"ContainerDied","Data":"c3925675d16d4559e27133114b0b918739304e4ebddd62d74c52e33ae5f25e71"} Jan 30 06:43:16 crc kubenswrapper[4931]: I0130 06:43:16.423107 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:43:16 crc kubenswrapper[4931]: E0130 06:43:16.423660 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:43:19 crc kubenswrapper[4931]: I0130 06:43:19.192186 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" event={"ID":"3758e44c-007e-45da-87b8-ed2fdeb3e23c","Type":"ContainerStarted","Data":"f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da"} Jan 30 06:43:19 crc kubenswrapper[4931]: I0130 06:43:19.196199 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-2clsb" event={"ID":"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb","Type":"ContainerStarted","Data":"870048acdaf12d315c0defcb750ddf7659f2ce415320432c25b469719ef6d1d5"} Jan 30 06:43:19 crc kubenswrapper[4931]: I0130 
06:43:19.235204 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-2clsb" podStartSLOduration=6.235183668 podStartE2EDuration="6.235183668s" podCreationTimestamp="2026-01-30 06:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:43:19.228438209 +0000 UTC m=+5734.598348466" watchObservedRunningTime="2026-01-30 06:43:19.235183668 +0000 UTC m=+5734.605093925" Jan 30 06:43:20 crc kubenswrapper[4931]: I0130 06:43:20.222852 4931 generic.go:334] "Generic (PLEG): container finished" podID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerID="f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da" exitCode=0 Jan 30 06:43:20 crc kubenswrapper[4931]: I0130 06:43:20.222921 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" event={"ID":"3758e44c-007e-45da-87b8-ed2fdeb3e23c","Type":"ContainerDied","Data":"f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da"} Jan 30 06:43:21 crc kubenswrapper[4931]: I0130 06:43:21.239753 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" event={"ID":"3758e44c-007e-45da-87b8-ed2fdeb3e23c","Type":"ContainerStarted","Data":"929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7"} Jan 30 06:43:21 crc kubenswrapper[4931]: I0130 06:43:21.246968 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-g99g6" event={"ID":"2c16935c-c83b-4b45-b4cd-b61f20ee764f","Type":"ContainerStarted","Data":"2772eecf4cd1ca5855b05ddd85c6c56124106ff7213c37304726e3e201063cd6"} Jan 30 06:43:21 crc kubenswrapper[4931]: I0130 06:43:21.248001 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:21 crc kubenswrapper[4931]: I0130 06:43:21.278312 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" podStartSLOduration=3.7254788899999998 podStartE2EDuration="14.278285615s" podCreationTimestamp="2026-01-30 06:43:07 +0000 UTC" firstStartedPulling="2026-01-30 06:43:08.364270993 +0000 UTC m=+5723.734181250" lastFinishedPulling="2026-01-30 06:43:18.917077718 +0000 UTC m=+5734.286987975" observedRunningTime="2026-01-30 06:43:21.269698375 +0000 UTC m=+5736.639608672" watchObservedRunningTime="2026-01-30 06:43:21.278285615 +0000 UTC m=+5736.648195902" Jan 30 06:43:23 crc kubenswrapper[4931]: I0130 06:43:23.367492 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:43:23 crc kubenswrapper[4931]: I0130 06:43:23.417361 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-g99g6" podStartSLOduration=4.877635547 podStartE2EDuration="17.417340828s" podCreationTimestamp="2026-01-30 06:43:06 +0000 UTC" firstStartedPulling="2026-01-30 06:43:07.684800614 +0000 UTC m=+5723.054710871" lastFinishedPulling="2026-01-30 06:43:20.224505865 +0000 UTC m=+5735.594416152" observedRunningTime="2026-01-30 06:43:21.313672545 +0000 UTC m=+5736.683582822" watchObservedRunningTime="2026-01-30 06:43:23.417340828 +0000 UTC m=+5738.787251095" Jan 30 06:43:23 crc kubenswrapper[4931]: I0130 06:43:23.652113 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-79684d7c94-4r69m" Jan 30 06:43:25 crc 
kubenswrapper[4931]: I0130 06:43:25.338286 4931 generic.go:334] "Generic (PLEG): container finished" podID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerID="870048acdaf12d315c0defcb750ddf7659f2ce415320432c25b469719ef6d1d5" exitCode=0 Jan 30 06:43:25 crc kubenswrapper[4931]: I0130 06:43:25.338714 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-2clsb" event={"ID":"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb","Type":"ContainerDied","Data":"870048acdaf12d315c0defcb750ddf7659f2ce415320432c25b469719ef6d1d5"} Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.820225 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.981217 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-combined-ca-bundle\") pod \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.981305 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data\") pod \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.981362 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-scripts\") pod \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.981482 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data-merged\") pod \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\" (UID: \"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb\") " Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.986609 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data" (OuterVolumeSpecName: "config-data") pod "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" (UID: "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:43:26 crc kubenswrapper[4931]: I0130 06:43:26.991622 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-scripts" (OuterVolumeSpecName: "scripts") pod "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" (UID: "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.005396 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" (UID: "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.005462 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" (UID: "6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.084244 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.084272 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.084281 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.084289 4931 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.363554 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-2clsb" event={"ID":"6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb","Type":"ContainerDied","Data":"ae7cb9704c5cb2662c592a30651057ff3bbb6f8f549e1ae4f1cbae5155be4ce3"} Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.363601 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae7cb9704c5cb2662c592a30651057ff3bbb6f8f549e1ae4f1cbae5155be4ce3" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.363629 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-2clsb" Jan 30 06:43:27 crc kubenswrapper[4931]: I0130 06:43:27.422782 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:43:27 crc kubenswrapper[4931]: E0130 06:43:27.423246 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:43:37 crc kubenswrapper[4931]: I0130 06:43:37.166721 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-g99g6" Jan 30 06:43:39 crc kubenswrapper[4931]: I0130 06:43:39.422746 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:43:39 crc kubenswrapper[4931]: E0130 06:43:39.423525 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:43:50 crc kubenswrapper[4931]: I0130 06:43:50.764470 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-6zvwf"] Jan 30 06:43:50 crc kubenswrapper[4931]: I0130 06:43:50.765301 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerName="octavia-amphora-httpd" containerID="cri-o://929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7" gracePeriod=30 Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.313091 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.476840 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3758e44c-007e-45da-87b8-ed2fdeb3e23c-httpd-config\") pod \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.476985 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3758e44c-007e-45da-87b8-ed2fdeb3e23c-amphora-image\") pod \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\" (UID: \"3758e44c-007e-45da-87b8-ed2fdeb3e23c\") " Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.509005 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3758e44c-007e-45da-87b8-ed2fdeb3e23c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3758e44c-007e-45da-87b8-ed2fdeb3e23c" (UID: "3758e44c-007e-45da-87b8-ed2fdeb3e23c"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.570236 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3758e44c-007e-45da-87b8-ed2fdeb3e23c-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "3758e44c-007e-45da-87b8-ed2fdeb3e23c" (UID: "3758e44c-007e-45da-87b8-ed2fdeb3e23c"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.579071 4931 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3758e44c-007e-45da-87b8-ed2fdeb3e23c-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.579126 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3758e44c-007e-45da-87b8-ed2fdeb3e23c-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.648839 4931 generic.go:334] "Generic (PLEG): container finished" podID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerID="929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7" exitCode=0 Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.648885 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.648949 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" event={"ID":"3758e44c-007e-45da-87b8-ed2fdeb3e23c","Type":"ContainerDied","Data":"929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7"} Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.649312 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-65dd99cb46-6zvwf" event={"ID":"3758e44c-007e-45da-87b8-ed2fdeb3e23c","Type":"ContainerDied","Data":"02c7a36d17a999021624a691df6d0451e8418cc04c05cd38164901646377f2ae"} Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.649352 4931 scope.go:117] "RemoveContainer" containerID="929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.684496 4931 scope.go:117] "RemoveContainer" containerID="f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.684949 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-6zvwf"] Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.735203 4931 scope.go:117] "RemoveContainer" containerID="929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.735478 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-65dd99cb46-6zvwf"] Jan 30 06:43:51 crc kubenswrapper[4931]: E0130 06:43:51.735724 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7\": container with ID starting with 929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7 not found: ID does not exist" containerID="929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.735766 4931 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7"} err="failed to get container status \"929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7\": rpc error: code = NotFound desc = could not find container \"929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7\": container with ID starting with 929ac0196599bf56ecf437d08494c70ab46bf574987f0b56c3266610e5ada2d7 not found: ID does not exist" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.735820 4931 scope.go:117] "RemoveContainer" containerID="f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da" Jan 30 06:43:51 crc kubenswrapper[4931]: E0130 06:43:51.736147 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da\": container with ID starting with f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da not found: ID does not exist" containerID="f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da" Jan 30 06:43:51 crc kubenswrapper[4931]: I0130 06:43:51.736182 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da"} err="failed to get container status \"f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da\": rpc error: code = NotFound desc = could not find container \"f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da\": container with ID starting with f42ddf8c49c2965abc82a7ab03416bcc67ad1377c7d9c720d14d82b166cad6da not found: ID does not exist" Jan 30 06:43:53 crc kubenswrapper[4931]: I0130 06:43:53.424195 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:43:53 crc kubenswrapper[4931]: E0130 06:43:53.425028 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:43:53 crc kubenswrapper[4931]: I0130 06:43:53.439851 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" path="/var/lib/kubelet/pods/3758e44c-007e-45da-87b8-ed2fdeb3e23c/volumes" Jan 30 06:44:07 crc kubenswrapper[4931]: I0130 06:44:07.422817 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:44:07 crc kubenswrapper[4931]: E0130 06:44:07.424195 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.158799 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-k6c7h"] Jan 30 06:44:12 crc 
kubenswrapper[4931]: E0130 06:44:12.160094 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerName="octavia-amphora-httpd" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.160114 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerName="octavia-amphora-httpd" Jan 30 06:44:12 crc kubenswrapper[4931]: E0130 06:44:12.160125 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerName="init" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.160133 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerName="init" Jan 30 06:44:12 crc kubenswrapper[4931]: E0130 06:44:12.160151 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerName="init" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.160159 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerName="init" Jan 30 06:44:12 crc kubenswrapper[4931]: E0130 06:44:12.160188 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerName="octavia-db-sync" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.160195 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerName="octavia-db-sync" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.160410 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3758e44c-007e-45da-87b8-ed2fdeb3e23c" containerName="octavia-amphora-httpd" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.160456 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" containerName="octavia-db-sync" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.161629 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.164996 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.165416 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.166618 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.190042 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-k6c7h"] Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.273654 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-amphora-certs\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.274350 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-combined-ca-bundle\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.274460 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c0bd14d-9378-4c91-87e8-4ec9681103e0-hm-ports\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.274641 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-config-data\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.275149 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-scripts\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.275247 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c0bd14d-9378-4c91-87e8-4ec9681103e0-config-data-merged\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.377123 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c0bd14d-9378-4c91-87e8-4ec9681103e0-hm-ports\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc 
kubenswrapper[4931]: I0130 06:44:12.377225 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-config-data\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.377292 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-scripts\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.377316 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c0bd14d-9378-4c91-87e8-4ec9681103e0-config-data-merged\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.377460 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-amphora-certs\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.377513 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-combined-ca-bundle\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.378694 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2c0bd14d-9378-4c91-87e8-4ec9681103e0-config-data-merged\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.379051 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/2c0bd14d-9378-4c91-87e8-4ec9681103e0-hm-ports\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.386836 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-scripts\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.388482 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-config-data\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.392363 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-amphora-certs\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.393636 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0bd14d-9378-4c91-87e8-4ec9681103e0-combined-ca-bundle\") pod \"octavia-healthmanager-k6c7h\" (UID: \"2c0bd14d-9378-4c91-87e8-4ec9681103e0\") " pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:12 crc kubenswrapper[4931]: I0130 06:44:12.487038 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.286790 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-k6c7h"] Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.931471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-k6c7h" event={"ID":"2c0bd14d-9378-4c91-87e8-4ec9681103e0","Type":"ContainerStarted","Data":"8a52de91951bbdcb31ccc68846d7cc777f1b38ff89e122b595ea35fd9a76b55b"} Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.931865 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-k6c7h" event={"ID":"2c0bd14d-9378-4c91-87e8-4ec9681103e0","Type":"ContainerStarted","Data":"bbb5eb015173bf1da64fbbaa1f0106d5a0771083daa2da6603ec6d8fe9b198e1"} Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.977695 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-fc9fv"] Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.979704 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.983383 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 30 06:44:13 crc kubenswrapper[4931]: I0130 06:44:13.983685 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.008011 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-fc9fv"] Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.119769 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8e6a5234-c995-4b65-afb5-e59eedb65e7f-hm-ports\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.119857 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-combined-ca-bundle\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.119908 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-config-data\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.119963 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8e6a5234-c995-4b65-afb5-e59eedb65e7f-config-data-merged\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.120078 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-amphora-certs\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.120596 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-scripts\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.221714 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-scripts\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.221769 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/8e6a5234-c995-4b65-afb5-e59eedb65e7f-hm-ports\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.221806 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-combined-ca-bundle\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.221844 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-config-data\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.221889 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8e6a5234-c995-4b65-afb5-e59eedb65e7f-config-data-merged\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.221934 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-amphora-certs\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.223110 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8e6a5234-c995-4b65-afb5-e59eedb65e7f-config-data-merged\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.223875 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8e6a5234-c995-4b65-afb5-e59eedb65e7f-hm-ports\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.227289 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-amphora-certs\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.227413 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-combined-ca-bundle\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.227618 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-config-data\") pod \"octavia-housekeeping-fc9fv\" (UID: 
\"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.236382 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6a5234-c995-4b65-afb5-e59eedb65e7f-scripts\") pod \"octavia-housekeeping-fc9fv\" (UID: \"8e6a5234-c995-4b65-afb5-e59eedb65e7f\") " pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.302117 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.873746 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-fc9fv"] Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.886087 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:44:14 crc kubenswrapper[4931]: I0130 06:44:14.941508 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-fc9fv" event={"ID":"8e6a5234-c995-4b65-afb5-e59eedb65e7f","Type":"ContainerStarted","Data":"6fb62f1608a0c2d4c77b4959ba80d11f43c4414d3987ec676a3507cf46ceb294"} Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.000267 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-68w9j"] Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.001874 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.005062 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.005271 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.027767 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-68w9j"] Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.075500 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-scripts\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.075845 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-amphora-certs\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.075878 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-config-data\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.076025 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5a81fa26-7f20-43ef-922e-a9e63ee73709-hm-ports\") pod \"octavia-worker-68w9j\" (UID: 
\"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.076069 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5a81fa26-7f20-43ef-922e-a9e63ee73709-config-data-merged\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.076287 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-combined-ca-bundle\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.178489 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5a81fa26-7f20-43ef-922e-a9e63ee73709-hm-ports\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.178555 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5a81fa26-7f20-43ef-922e-a9e63ee73709-config-data-merged\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.178611 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-combined-ca-bundle\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.178664 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-scripts\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.178693 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-amphora-certs\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.178731 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-config-data\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.179203 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5a81fa26-7f20-43ef-922e-a9e63ee73709-config-data-merged\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.179592 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5a81fa26-7f20-43ef-922e-a9e63ee73709-hm-ports\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.185106 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-combined-ca-bundle\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.185125 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-scripts\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.185742 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-amphora-certs\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.189283 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81fa26-7f20-43ef-922e-a9e63ee73709-config-data\") pod \"octavia-worker-68w9j\" (UID: \"5a81fa26-7f20-43ef-922e-a9e63ee73709\") " pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.328169 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-68w9j" Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.954751 4931 generic.go:334] "Generic (PLEG): container finished" podID="2c0bd14d-9378-4c91-87e8-4ec9681103e0" containerID="8a52de91951bbdcb31ccc68846d7cc777f1b38ff89e122b595ea35fd9a76b55b" exitCode=0 Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.954837 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-k6c7h" event={"ID":"2c0bd14d-9378-4c91-87e8-4ec9681103e0","Type":"ContainerDied","Data":"8a52de91951bbdcb31ccc68846d7cc777f1b38ff89e122b595ea35fd9a76b55b"} Jan 30 06:44:15 crc kubenswrapper[4931]: I0130 06:44:15.982416 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-68w9j"] Jan 30 06:44:16 crc kubenswrapper[4931]: I0130 06:44:16.639548 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-k6c7h"] Jan 30 06:44:16 crc kubenswrapper[4931]: I0130 06:44:16.966242 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-k6c7h" event={"ID":"2c0bd14d-9378-4c91-87e8-4ec9681103e0","Type":"ContainerStarted","Data":"6122060f494296a8e4b2e1c6ee845d723adc0c686e59941c94d53816d5a63cda"} Jan 30 06:44:16 crc kubenswrapper[4931]: I0130 06:44:16.967239 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:16 crc kubenswrapper[4931]: I0130 06:44:16.967847 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-68w9j" event={"ID":"5a81fa26-7f20-43ef-922e-a9e63ee73709","Type":"ContainerStarted","Data":"f54d28225b778ea2ddc3b78776bb0991a57090591c3d8b47390b17a542b19f4e"} Jan 30 06:44:16 crc kubenswrapper[4931]: I0130 06:44:16.996709 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-k6c7h" podStartSLOduration=4.9966858819999995 podStartE2EDuration="4.996685882s" podCreationTimestamp="2026-01-30 06:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:44:16.987041692 +0000 UTC m=+5792.356951949" watchObservedRunningTime="2026-01-30 06:44:16.996685882 +0000 UTC m=+5792.366596149" Jan 30 06:44:17 crc kubenswrapper[4931]: I0130 06:44:17.988477 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-fc9fv" event={"ID":"8e6a5234-c995-4b65-afb5-e59eedb65e7f","Type":"ContainerStarted","Data":"8c7d323579524c3d074fa2b4072a03dfb7e2913bc31be87ed2bea09bc0e4423b"} Jan 30 06:44:19 crc kubenswrapper[4931]: I0130 06:44:19.002868 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-68w9j" event={"ID":"5a81fa26-7f20-43ef-922e-a9e63ee73709","Type":"ContainerStarted","Data":"f8c6954f1283d00375ad1adbb112c4e95428775aa61270f8fba6ef3d63b0d8ff"} Jan 30 06:44:19 crc kubenswrapper[4931]: I0130 06:44:19.007757 4931 generic.go:334] "Generic (PLEG): container finished" podID="8e6a5234-c995-4b65-afb5-e59eedb65e7f" containerID="8c7d323579524c3d074fa2b4072a03dfb7e2913bc31be87ed2bea09bc0e4423b" exitCode=0 Jan 30 06:44:19 crc kubenswrapper[4931]: I0130 06:44:19.007811 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-fc9fv" event={"ID":"8e6a5234-c995-4b65-afb5-e59eedb65e7f","Type":"ContainerDied","Data":"8c7d323579524c3d074fa2b4072a03dfb7e2913bc31be87ed2bea09bc0e4423b"} Jan 30 06:44:19 crc kubenswrapper[4931]: I0130 
06:44:19.421995 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:44:19 crc kubenswrapper[4931]: E0130 06:44:19.422712 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:44:20 crc kubenswrapper[4931]: I0130 06:44:20.021877 4931 generic.go:334] "Generic (PLEG): container finished" podID="5a81fa26-7f20-43ef-922e-a9e63ee73709" containerID="f8c6954f1283d00375ad1adbb112c4e95428775aa61270f8fba6ef3d63b0d8ff" exitCode=0 Jan 30 06:44:20 crc kubenswrapper[4931]: I0130 06:44:20.022477 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-68w9j" event={"ID":"5a81fa26-7f20-43ef-922e-a9e63ee73709","Type":"ContainerDied","Data":"f8c6954f1283d00375ad1adbb112c4e95428775aa61270f8fba6ef3d63b0d8ff"} Jan 30 06:44:20 crc kubenswrapper[4931]: I0130 06:44:20.031055 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-fc9fv" event={"ID":"8e6a5234-c995-4b65-afb5-e59eedb65e7f","Type":"ContainerStarted","Data":"2f855c61c20ebef2e100f8d56770a0cf709039a79ad9daf792dbd7ee43daac79"} Jan 30 06:44:20 crc kubenswrapper[4931]: I0130 06:44:20.031247 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:20 crc kubenswrapper[4931]: I0130 06:44:20.081342 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-fc9fv" podStartSLOduration=5.11263152 podStartE2EDuration="7.081314347s" podCreationTimestamp="2026-01-30 06:44:13 +0000 UTC" firstStartedPulling="2026-01-30 06:44:14.885788906 +0000 UTC m=+5790.255699173" lastFinishedPulling="2026-01-30 06:44:16.854471743 +0000 UTC m=+5792.224382000" observedRunningTime="2026-01-30 06:44:20.065051232 +0000 UTC m=+5795.434961499" watchObservedRunningTime="2026-01-30 06:44:20.081314347 +0000 UTC m=+5795.451224644" Jan 30 06:44:21 crc kubenswrapper[4931]: I0130 06:44:21.051414 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-68w9j" event={"ID":"5a81fa26-7f20-43ef-922e-a9e63ee73709","Type":"ContainerStarted","Data":"11accab5544da0b7dc5a0d5765d3610b5d7ded09781153af52120e63dd5fe556"} Jan 30 06:44:21 crc kubenswrapper[4931]: I0130 06:44:21.052100 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-68w9j" Jan 30 06:44:21 crc kubenswrapper[4931]: I0130 06:44:21.088862 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-68w9j" podStartSLOduration=4.770406644 podStartE2EDuration="7.088838084s" podCreationTimestamp="2026-01-30 06:44:14 +0000 UTC" firstStartedPulling="2026-01-30 06:44:15.994055002 +0000 UTC m=+5791.363965259" lastFinishedPulling="2026-01-30 06:44:18.312486442 +0000 UTC m=+5793.682396699" observedRunningTime="2026-01-30 06:44:21.082594639 +0000 UTC m=+5796.452504926" watchObservedRunningTime="2026-01-30 06:44:21.088838084 +0000 UTC m=+5796.458748381" Jan 30 06:44:27 crc kubenswrapper[4931]: I0130 06:44:27.514708 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
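The pod_startup_latency_tracker lines above carry the measured startup numbers: podStartSLOduration excludes image-pull time, while podStartE2EDuration is wall-clock from pod creation to observed running. A small sketch to tabulate them, assuming the key=value layout shown:

    import re

    POD = re.compile(r'pod="([^"]+)"')
    DURATION = re.compile(r'(podStartSLOduration|podStartE2EDuration)="?([0-9.]+s?)"?')

    def startup_durations(lines):
        """Map pod -> {duration field: value} from latency-tracker lines."""
        table = {}
        for line in lines:
            if 'Observed pod startup duration' not in line:
                continue
            pod = POD.search(line)
            if pod:
                table[pod.group(1)] = dict(DURATION.findall(line))
        return table

From the lines above, octavia-housekeeping-fc9fv reports podStartSLOduration=5.11263152 against podStartE2EDuration="7.081314347s"; the roughly two-second gap matches the image pull between firstStartedPulling and lastFinishedPulling.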
pod="openstack/octavia-healthmanager-k6c7h" Jan 30 06:44:29 crc kubenswrapper[4931]: I0130 06:44:29.343054 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-fc9fv" Jan 30 06:44:30 crc kubenswrapper[4931]: I0130 06:44:30.405932 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-68w9j" Jan 30 06:44:31 crc kubenswrapper[4931]: I0130 06:44:31.423307 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:44:31 crc kubenswrapper[4931]: E0130 06:44:31.423896 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:44:45 crc kubenswrapper[4931]: I0130 06:44:45.441996 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:44:45 crc kubenswrapper[4931]: E0130 06:44:45.442925 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.159509 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb"] Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.161842 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.164922 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.165177 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.189582 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb"] Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.203695 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0d02c3-f482-4b3c-b015-544fc50919b7-secret-volume\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.203800 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0d02c3-f482-4b3c-b015-544fc50919b7-config-volume\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.203921 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vjx\" (UniqueName: \"kubernetes.io/projected/3d0d02c3-f482-4b3c-b015-544fc50919b7-kube-api-access-k6vjx\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.305827 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0d02c3-f482-4b3c-b015-544fc50919b7-secret-volume\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.305911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0d02c3-f482-4b3c-b015-544fc50919b7-config-volume\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.306020 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6vjx\" (UniqueName: \"kubernetes.io/projected/3d0d02c3-f482-4b3c-b015-544fc50919b7-kube-api-access-k6vjx\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.307628 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0d02c3-f482-4b3c-b015-544fc50919b7-config-volume\") pod 
\"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.312109 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0d02c3-f482-4b3c-b015-544fc50919b7-secret-volume\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.326455 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6vjx\" (UniqueName: \"kubernetes.io/projected/3d0d02c3-f482-4b3c-b015-544fc50919b7-kube-api-access-k6vjx\") pod \"collect-profiles-29495925-rsbpb\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.448290 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:45:00 crc kubenswrapper[4931]: E0130 06:45:00.448820 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.498281 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.685285 4931 scope.go:117] "RemoveContainer" containerID="a67c2af59ce774fac5d99d16e2c4d0308297f692031a8a81e468f5bef97702ac" Jan 30 06:45:00 crc kubenswrapper[4931]: I0130 06:45:00.722933 4931 scope.go:117] "RemoveContainer" containerID="7956c67be4873a213d6ce531a234902ab8e420fef8d81bd2f9cc50a55b2ed19e" Jan 30 06:45:01 crc kubenswrapper[4931]: I0130 06:45:01.006145 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb"] Jan 30 06:45:01 crc kubenswrapper[4931]: I0130 06:45:01.638661 4931 generic.go:334] "Generic (PLEG): container finished" podID="3d0d02c3-f482-4b3c-b015-544fc50919b7" containerID="a28d2da22cb75022f4cebf8bfd4527b7d329cedd3a234cdb0a658e8ba3685b7e" exitCode=0 Jan 30 06:45:01 crc kubenswrapper[4931]: I0130 06:45:01.638728 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" event={"ID":"3d0d02c3-f482-4b3c-b015-544fc50919b7","Type":"ContainerDied","Data":"a28d2da22cb75022f4cebf8bfd4527b7d329cedd3a234cdb0a658e8ba3685b7e"} Jan 30 06:45:01 crc kubenswrapper[4931]: I0130 06:45:01.639671 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" event={"ID":"3d0d02c3-f482-4b3c-b015-544fc50919b7","Type":"ContainerStarted","Data":"a209fefa99ef75ed104126fe67e2831923a710d9621f51f36a272702f29e5f02"} Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.116964 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.216507 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6vjx\" (UniqueName: \"kubernetes.io/projected/3d0d02c3-f482-4b3c-b015-544fc50919b7-kube-api-access-k6vjx\") pod \"3d0d02c3-f482-4b3c-b015-544fc50919b7\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.216749 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0d02c3-f482-4b3c-b015-544fc50919b7-config-volume\") pod \"3d0d02c3-f482-4b3c-b015-544fc50919b7\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.216887 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0d02c3-f482-4b3c-b015-544fc50919b7-secret-volume\") pod \"3d0d02c3-f482-4b3c-b015-544fc50919b7\" (UID: \"3d0d02c3-f482-4b3c-b015-544fc50919b7\") " Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.218123 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d0d02c3-f482-4b3c-b015-544fc50919b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d0d02c3-f482-4b3c-b015-544fc50919b7" (UID: "3d0d02c3-f482-4b3c-b015-544fc50919b7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.233793 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0d02c3-f482-4b3c-b015-544fc50919b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d0d02c3-f482-4b3c-b015-544fc50919b7" (UID: "3d0d02c3-f482-4b3c-b015-544fc50919b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.233944 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0d02c3-f482-4b3c-b015-544fc50919b7-kube-api-access-k6vjx" (OuterVolumeSpecName: "kube-api-access-k6vjx") pod "3d0d02c3-f482-4b3c-b015-544fc50919b7" (UID: "3d0d02c3-f482-4b3c-b015-544fc50919b7"). InnerVolumeSpecName "kube-api-access-k6vjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.318583 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d0d02c3-f482-4b3c-b015-544fc50919b7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.318809 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6vjx\" (UniqueName: \"kubernetes.io/projected/3d0d02c3-f482-4b3c-b015-544fc50919b7-kube-api-access-k6vjx\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.318869 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d0d02c3-f482-4b3c-b015-544fc50919b7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.662344 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" event={"ID":"3d0d02c3-f482-4b3c-b015-544fc50919b7","Type":"ContainerDied","Data":"a209fefa99ef75ed104126fe67e2831923a710d9621f51f36a272702f29e5f02"} Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.662387 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a209fefa99ef75ed104126fe67e2831923a710d9621f51f36a272702f29e5f02" Jan 30 06:45:03 crc kubenswrapper[4931]: I0130 06:45:03.662811 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495925-rsbpb" Jan 30 06:45:04 crc kubenswrapper[4931]: I0130 06:45:04.209176 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g"] Jan 30 06:45:04 crc kubenswrapper[4931]: I0130 06:45:04.224244 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495880-jrm5g"] Jan 30 06:45:05 crc kubenswrapper[4931]: I0130 06:45:05.441456 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba" path="/var/lib/kubelet/pods/ffc6cfb6-b46d-49b4-b05d-e9c2b033a1ba/volumes" Jan 30 06:45:12 crc kubenswrapper[4931]: I0130 06:45:12.050730 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-cxbxk"] Jan 30 06:45:12 crc kubenswrapper[4931]: I0130 06:45:12.062502 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-64f9-account-create-update-sm7kp"] Jan 30 06:45:12 crc kubenswrapper[4931]: I0130 06:45:12.072903 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-cxbxk"] Jan 30 06:45:12 crc kubenswrapper[4931]: I0130 06:45:12.081836 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-64f9-account-create-update-sm7kp"] Jan 30 06:45:13 crc kubenswrapper[4931]: I0130 06:45:13.438417 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8b686f-89e9-4561-b4da-73c3087f1913" path="/var/lib/kubelet/pods/7e8b686f-89e9-4561-b4da-73c3087f1913/volumes" Jan 30 06:45:13 crc kubenswrapper[4931]: I0130 06:45:13.440124 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1237d07-19d9-47bb-8fb8-42e905fcc41b" path="/var/lib/kubelet/pods/f1237d07-19d9-47bb-8fb8-42e905fcc41b/volumes" Jan 30 06:45:14 crc kubenswrapper[4931]: I0130 06:45:14.422729 4931 scope.go:117] 
"RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:45:14 crc kubenswrapper[4931]: E0130 06:45:14.423345 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.028850 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fd79f5877-c4n2v"] Jan 30 06:45:17 crc kubenswrapper[4931]: E0130 06:45:17.029718 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0d02c3-f482-4b3c-b015-544fc50919b7" containerName="collect-profiles" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.029732 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0d02c3-f482-4b3c-b015-544fc50919b7" containerName="collect-profiles" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.029914 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0d02c3-f482-4b3c-b015-544fc50919b7" containerName="collect-profiles" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.031032 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.038393 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zzqsh"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.039670 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.039725 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.039872 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-s44jf" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.039972 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.048909 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zzqsh"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.057954 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fd79f5877-c4n2v"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.094545 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.094779 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-log" containerID="cri-o://d05f5e569f00e57864985b93b02a9cff6181003207de011c326a946bc0f2b2f3" gracePeriod=30 Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.094859 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-httpd" containerID="cri-o://9f0644fcac396be5d0abd6c6b2b21170c61642dc6f5db73f638093f772151226" gracePeriod=30 Jan 
30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.122655 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-scripts\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.122742 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00dba7a3-7492-4010-9931-1ed387dc22a7-logs\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.122762 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00dba7a3-7492-4010-9931-1ed387dc22a7-horizon-secret-key\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.122891 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-config-data\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.122929 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/00dba7a3-7492-4010-9931-1ed387dc22a7-kube-api-access-l5pmn\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.142069 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86ccfbfc65-5jz59"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.143910 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.158299 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.158629 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-log" containerID="cri-o://32f7ab137db348695ddb60d19f60238d0ad9feb2d7bfb8d7925247bd1ac76d50" gracePeriod=30 Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.158771 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-httpd" containerID="cri-o://d2e2adf695545abae4fa1879a24ad260b75515f25d086b16166bdfe80e55cc22" gracePeriod=30 Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.189773 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86ccfbfc65-5jz59"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.224925 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-config-data\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.224986 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-scripts\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225005 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/00dba7a3-7492-4010-9931-1ed387dc22a7-kube-api-access-l5pmn\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-config-data\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225051 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdtv\" (UniqueName: \"kubernetes.io/projected/e180809a-b692-42c0-b821-723afe805954-kube-api-access-7hdtv\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225075 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e180809a-b692-42c0-b821-723afe805954-horizon-secret-key\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225104 4931 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-scripts\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225271 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00dba7a3-7492-4010-9931-1ed387dc22a7-logs\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225312 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00dba7a3-7492-4010-9931-1ed387dc22a7-horizon-secret-key\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225515 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e180809a-b692-42c0-b821-723afe805954-logs\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.225809 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00dba7a3-7492-4010-9931-1ed387dc22a7-logs\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.226093 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-scripts\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.227107 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-config-data\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.236091 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00dba7a3-7492-4010-9931-1ed387dc22a7-horizon-secret-key\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.239734 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/00dba7a3-7492-4010-9931-1ed387dc22a7-kube-api-access-l5pmn\") pod \"horizon-5fd79f5877-c4n2v\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.327053 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-config-data\") pod \"horizon-86ccfbfc65-5jz59\" 
(UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.327097 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hdtv\" (UniqueName: \"kubernetes.io/projected/e180809a-b692-42c0-b821-723afe805954-kube-api-access-7hdtv\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.327126 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e180809a-b692-42c0-b821-723afe805954-horizon-secret-key\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.327256 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e180809a-b692-42c0-b821-723afe805954-logs\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.327315 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-scripts\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.328002 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-scripts\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.328324 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e180809a-b692-42c0-b821-723afe805954-logs\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.328744 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-config-data\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.331329 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e180809a-b692-42c0-b821-723afe805954-horizon-secret-key\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.345717 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.348352 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdtv\" (UniqueName: \"kubernetes.io/projected/e180809a-b692-42c0-b821-723afe805954-kube-api-access-7hdtv\") pod \"horizon-86ccfbfc65-5jz59\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.431964 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60872807-e034-4844-9f79-8005640c308c" path="/var/lib/kubelet/pods/60872807-e034-4844-9f79-8005640c308c/volumes" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.493124 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.736720 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fd79f5877-c4n2v"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.759603 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b8f5df775-m6dvd"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.761718 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.776949 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b8f5df775-m6dvd"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.805666 4931 generic.go:334] "Generic (PLEG): container finished" podID="50e397ef-0630-40db-a591-28d7584dee76" containerID="32f7ab137db348695ddb60d19f60238d0ad9feb2d7bfb8d7925247bd1ac76d50" exitCode=143 Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.805712 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50e397ef-0630-40db-a591-28d7584dee76","Type":"ContainerDied","Data":"32f7ab137db348695ddb60d19f60238d0ad9feb2d7bfb8d7925247bd1ac76d50"} Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.809935 4931 generic.go:334] "Generic (PLEG): container finished" podID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerID="d05f5e569f00e57864985b93b02a9cff6181003207de011c326a946bc0f2b2f3" exitCode=143 Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.809959 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ab7585b-916e-4a6a-8aa8-da769aaa437e","Type":"ContainerDied","Data":"d05f5e569f00e57864985b93b02a9cff6181003207de011c326a946bc0f2b2f3"} Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.821367 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fd79f5877-c4n2v"] Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.844214 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-config-data\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.844354 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477f8\" (UniqueName: \"kubernetes.io/projected/1d524b32-d060-41f3-88a6-d5339c438fff-kube-api-access-477f8\") pod 
\"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.844394 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d524b32-d060-41f3-88a6-d5339c438fff-horizon-secret-key\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.844436 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-scripts\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.844461 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d524b32-d060-41f3-88a6-d5339c438fff-logs\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.946392 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-477f8\" (UniqueName: \"kubernetes.io/projected/1d524b32-d060-41f3-88a6-d5339c438fff-kube-api-access-477f8\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.946488 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d524b32-d060-41f3-88a6-d5339c438fff-horizon-secret-key\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.946521 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-scripts\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.946550 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d524b32-d060-41f3-88a6-d5339c438fff-logs\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.946592 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-config-data\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.948094 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-config-data\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 
06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.949580 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d524b32-d060-41f3-88a6-d5339c438fff-logs\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.950106 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-scripts\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.962747 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d524b32-d060-41f3-88a6-d5339c438fff-horizon-secret-key\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.968118 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477f8\" (UniqueName: \"kubernetes.io/projected/1d524b32-d060-41f3-88a6-d5339c438fff-kube-api-access-477f8\") pod \"horizon-7b8f5df775-m6dvd\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:17 crc kubenswrapper[4931]: I0130 06:45:17.992461 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86ccfbfc65-5jz59"] Jan 30 06:45:17 crc kubenswrapper[4931]: W0130 06:45:17.995387 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode180809a_b692_42c0_b821_723afe805954.slice/crio-6e7fd796d4cb0d311b3d0a188a2af32435b8c86d4ea09b091cca87d30bbf2b5d WatchSource:0}: Error finding container 6e7fd796d4cb0d311b3d0a188a2af32435b8c86d4ea09b091cca87d30bbf2b5d: Status 404 returned error can't find the container with id 6e7fd796d4cb0d311b3d0a188a2af32435b8c86d4ea09b091cca87d30bbf2b5d Jan 30 06:45:18 crc kubenswrapper[4931]: I0130 06:45:18.083745 4931 util.go:30] "No sandbox for pod can be found. 
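The W0130 manager.go warning above is cAdvisor racing container creation: it sees the new crio-... cgroup before the container is registered and gets a 404. The slice name encodes the pod UID with dashes swapped for underscores, so it can be mapped back to the pod; a small helper, assuming the slice segment is passed on its own:

    def slice_to_pod_uid(slice_name):
        """Recover a pod UID from a kubepods systemd slice name.

        >>> slice_to_pod_uid(
        ...     'kubepods-besteffort-pode180809a_b692_42c0_b821_723afe805954.slice')
        'e180809a-b692-42c0-b821-723afe805954'
        """
        stem = slice_name.rsplit('-pod', 1)[1]
        return stem.removesuffix('.slice').replace('_', '-')

Applied to the warning above, the UID resolves to e180809a-b692-42c0-b821-723afe805954, i.e. horizon-86ccfbfc65-5jz59 from the mount lines earlier.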
Need to start a new one" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:18 crc kubenswrapper[4931]: I0130 06:45:18.585691 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b8f5df775-m6dvd"] Jan 30 06:45:18 crc kubenswrapper[4931]: W0130 06:45:18.588198 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d524b32_d060_41f3_88a6_d5339c438fff.slice/crio-b4e9652d680d0769885e340ea1c5e364bdd99c6ea6b532de8c239df55ba5c48d WatchSource:0}: Error finding container b4e9652d680d0769885e340ea1c5e364bdd99c6ea6b532de8c239df55ba5c48d: Status 404 returned error can't find the container with id b4e9652d680d0769885e340ea1c5e364bdd99c6ea6b532de8c239df55ba5c48d Jan 30 06:45:18 crc kubenswrapper[4931]: I0130 06:45:18.827980 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd79f5877-c4n2v" event={"ID":"00dba7a3-7492-4010-9931-1ed387dc22a7","Type":"ContainerStarted","Data":"b88b4f7f43f890396ef5c3306741e800af49790969d1f953fb512e6e66e5766b"} Jan 30 06:45:18 crc kubenswrapper[4931]: I0130 06:45:18.832867 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ccfbfc65-5jz59" event={"ID":"e180809a-b692-42c0-b821-723afe805954","Type":"ContainerStarted","Data":"6e7fd796d4cb0d311b3d0a188a2af32435b8c86d4ea09b091cca87d30bbf2b5d"} Jan 30 06:45:18 crc kubenswrapper[4931]: I0130 06:45:18.835389 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8f5df775-m6dvd" event={"ID":"1d524b32-d060-41f3-88a6-d5339c438fff","Type":"ContainerStarted","Data":"b4e9652d680d0769885e340ea1c5e364bdd99c6ea6b532de8c239df55ba5c48d"} Jan 30 06:45:20 crc kubenswrapper[4931]: I0130 06:45:20.256293 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.47:9292/healthcheck\": read tcp 10.217.0.2:53626->10.217.1.47:9292: read: connection reset by peer" Jan 30 06:45:20 crc kubenswrapper[4931]: I0130 06:45:20.256352 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.47:9292/healthcheck\": read tcp 10.217.0.2:53638->10.217.1.47:9292: read: connection reset by peer" Jan 30 06:45:20 crc kubenswrapper[4931]: I0130 06:45:20.861053 4931 generic.go:334] "Generic (PLEG): container finished" podID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerID="9f0644fcac396be5d0abd6c6b2b21170c61642dc6f5db73f638093f772151226" exitCode=0 Jan 30 06:45:20 crc kubenswrapper[4931]: I0130 06:45:20.861114 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ab7585b-916e-4a6a-8aa8-da769aaa437e","Type":"ContainerDied","Data":"9f0644fcac396be5d0abd6c6b2b21170c61642dc6f5db73f638093f772151226"} Jan 30 06:45:20 crc kubenswrapper[4931]: I0130 06:45:20.863970 4931 generic.go:334] "Generic (PLEG): container finished" podID="50e397ef-0630-40db-a591-28d7584dee76" containerID="d2e2adf695545abae4fa1879a24ad260b75515f25d086b16166bdfe80e55cc22" exitCode=0 Jan 30 06:45:20 crc kubenswrapper[4931]: I0130 06:45:20.863991 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"50e397ef-0630-40db-a591-28d7584dee76","Type":"ContainerDied","Data":"d2e2adf695545abae4fa1879a24ad260b75515f25d086b16166bdfe80e55cc22"} Jan 30 06:45:21 crc kubenswrapper[4931]: I0130 06:45:21.250029 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.48:9292/healthcheck\": dial tcp 10.217.1.48:9292: connect: connection refused" Jan 30 06:45:21 crc kubenswrapper[4931]: I0130 06:45:21.250159 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.48:9292/healthcheck\": dial tcp 10.217.1.48:9292: connect: connection refused" Jan 30 06:45:24 crc kubenswrapper[4931]: I0130 06:45:24.948753 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000084 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-config-data\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000220 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-httpd-run\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000251 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-scripts\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000268 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-combined-ca-bundle\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000285 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-logs\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000341 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-ceph\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.000374 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8j2s\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-kube-api-access-h8j2s\") pod \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\" (UID: \"7ab7585b-916e-4a6a-8aa8-da769aaa437e\") " Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 
06:45:25.001267 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-logs" (OuterVolumeSpecName: "logs") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.001731 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.004935 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.004959 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ab7585b-916e-4a6a-8aa8-da769aaa437e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.005788 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-kube-api-access-h8j2s" (OuterVolumeSpecName: "kube-api-access-h8j2s") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "kube-api-access-h8j2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.011531 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-ceph" (OuterVolumeSpecName: "ceph") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.018676 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-scripts" (OuterVolumeSpecName: "scripts") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.039216 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.081554 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-config-data" (OuterVolumeSpecName: "config-data") pod "7ab7585b-916e-4a6a-8aa8-da769aaa437e" (UID: "7ab7585b-916e-4a6a-8aa8-da769aaa437e"). InnerVolumeSpecName "config-data". 
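The external glance pod's teardown follows one fixed pattern per volume: reconciler_common logs "UnmountVolume started", operation_generator confirms "UnmountVolume.TearDown succeeded" (note the OuterVolumeSpecName, the name used in the pod spec, versus the InnerVolumeSpecName the plugin works with), and reconciler_common finally reports "Volume detached ... DevicePath \"\"". A toy sketch of that loop, using hypothetical types rather than kubelet's actual ones:

```go
package main

import "fmt"

// mountedVolume is a stand-in for an entry in the reconciler's
// actual-state-of-world cache: a volume still mounted for a pod
// that the API server has already deleted.
type mountedVolume struct{ podUID, name string }

// tearDown stands in for the plugin-specific cleanup behind the
// "UnmountVolume.TearDown succeeded" records.
func tearDown(v mountedVolume) error { return nil }

func main() {
	// Volumes of the deleted glance-default-external-api-0 pod, from the log.
	vols := []mountedVolume{
		{"7ab7585b-916e-4a6a-8aa8-da769aaa437e", "logs"},
		{"7ab7585b-916e-4a6a-8aa8-da769aaa437e", "httpd-run"},
		{"7ab7585b-916e-4a6a-8aa8-da769aaa437e", "config-data"},
	}
	for _, v := range vols {
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.podUID)
		if err := tearDown(v); err != nil {
			continue // stays in the cache; retried on the next reconciler pass
		}
		fmt.Printf("Volume detached for volume %q DevicePath %q\n", v.name, "")
	}
}
```

A TearDown error would leave the volume in the actual-state cache, which is roughly why these record triples can repeat for a wedged volume.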
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.107386 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.107448 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.107459 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab7585b-916e-4a6a-8aa8-da769aaa437e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.107469 4931 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.107477 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8j2s\" (UniqueName: \"kubernetes.io/projected/7ab7585b-916e-4a6a-8aa8-da769aaa437e-kube-api-access-h8j2s\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.925954 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd79f5877-c4n2v" event={"ID":"00dba7a3-7492-4010-9931-1ed387dc22a7","Type":"ContainerStarted","Data":"0ac695e770aa1e4dca8f308d0921c4323577d50a3d4f87168b23f7117ef54020"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.926295 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd79f5877-c4n2v" event={"ID":"00dba7a3-7492-4010-9931-1ed387dc22a7","Type":"ContainerStarted","Data":"3f69e23981e22c16e2b3eb960775aef58a246bf4d2a400c78b43e6f979492f62"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.926221 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fd79f5877-c4n2v" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon" containerID="cri-o://0ac695e770aa1e4dca8f308d0921c4323577d50a3d4f87168b23f7117ef54020" gracePeriod=30 Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.926102 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fd79f5877-c4n2v" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon-log" containerID="cri-o://3f69e23981e22c16e2b3eb960775aef58a246bf4d2a400c78b43e6f979492f62" gracePeriod=30 Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.931090 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ccfbfc65-5jz59" event={"ID":"e180809a-b692-42c0-b821-723afe805954","Type":"ContainerStarted","Data":"700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.931133 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ccfbfc65-5jz59" event={"ID":"e180809a-b692-42c0-b821-723afe805954","Type":"ContainerStarted","Data":"72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.936699 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8f5df775-m6dvd" 
event={"ID":"1d524b32-d060-41f3-88a6-d5339c438fff","Type":"ContainerStarted","Data":"85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.936758 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8f5df775-m6dvd" event={"ID":"1d524b32-d060-41f3-88a6-d5339c438fff","Type":"ContainerStarted","Data":"16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.939952 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ab7585b-916e-4a6a-8aa8-da769aaa437e","Type":"ContainerDied","Data":"266032ac228e0593b292c8f7becc87b38799dd433a3129a19387d1cf6ec27145"} Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.940017 4931 scope.go:117] "RemoveContainer" containerID="9f0644fcac396be5d0abd6c6b2b21170c61642dc6f5db73f638093f772151226" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.940182 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.970324 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fd79f5877-c4n2v" podStartSLOduration=2.122172865 podStartE2EDuration="8.970297948s" podCreationTimestamp="2026-01-30 06:45:17 +0000 UTC" firstStartedPulling="2026-01-30 06:45:17.851000253 +0000 UTC m=+5853.220910510" lastFinishedPulling="2026-01-30 06:45:24.699125336 +0000 UTC m=+5860.069035593" observedRunningTime="2026-01-30 06:45:25.951823001 +0000 UTC m=+5861.321733288" watchObservedRunningTime="2026-01-30 06:45:25.970297948 +0000 UTC m=+5861.340208215" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.994531 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b8f5df775-m6dvd" podStartSLOduration=2.916776204 podStartE2EDuration="8.994516425s" podCreationTimestamp="2026-01-30 06:45:17 +0000 UTC" firstStartedPulling="2026-01-30 06:45:18.591961342 +0000 UTC m=+5853.961871599" lastFinishedPulling="2026-01-30 06:45:24.669701563 +0000 UTC m=+5860.039611820" observedRunningTime="2026-01-30 06:45:25.984341651 +0000 UTC m=+5861.354251918" watchObservedRunningTime="2026-01-30 06:45:25.994516425 +0000 UTC m=+5861.364426682" Jan 30 06:45:25 crc kubenswrapper[4931]: I0130 06:45:25.995034 4931 scope.go:117] "RemoveContainer" containerID="d05f5e569f00e57864985b93b02a9cff6181003207de011c326a946bc0f2b2f3" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.018979 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86ccfbfc65-5jz59" podStartSLOduration=2.352468168 podStartE2EDuration="9.018958709s" podCreationTimestamp="2026-01-30 06:45:17 +0000 UTC" firstStartedPulling="2026-01-30 06:45:17.99887682 +0000 UTC m=+5853.368787077" lastFinishedPulling="2026-01-30 06:45:24.665367371 +0000 UTC m=+5860.035277618" observedRunningTime="2026-01-30 06:45:26.010817511 +0000 UTC m=+5861.380727768" watchObservedRunningTime="2026-01-30 06:45:26.018958709 +0000 UTC m=+5861.388868966" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.040613 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.093665 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:45:26 crc 
kubenswrapper[4931]: I0130 06:45:26.104250 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:45:26 crc kubenswrapper[4931]: E0130 06:45:26.105007 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-log" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.105027 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-log" Jan 30 06:45:26 crc kubenswrapper[4931]: E0130 06:45:26.105053 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-httpd" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.105060 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-httpd" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.105392 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-log" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.105442 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" containerName="glance-httpd" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.112734 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.114275 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.117667 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.145412 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234226 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-scripts\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234328 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-httpd-run\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234391 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-logs\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234444 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-ceph\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234473 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-config-data\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234536 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-combined-ca-bundle\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234576 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7hqx\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-kube-api-access-x7hqx\") pod \"50e397ef-0630-40db-a591-28d7584dee76\" (UID: \"50e397ef-0630-40db-a591-28d7584dee76\") " Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234846 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234924 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94f5b24a-840b-4206-a190-63cd6339ed70-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.234962 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/94f5b24a-840b-4206-a190-63cd6339ed70-logs\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.235013 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bj8j\" (UniqueName: \"kubernetes.io/projected/94f5b24a-840b-4206-a190-63cd6339ed70-kube-api-access-8bj8j\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.235051 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-scripts\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.235096 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94f5b24a-840b-4206-a190-63cd6339ed70-ceph\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.235123 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-config-data\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.258878 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.259197 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-logs" (OuterVolumeSpecName: "logs") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.267091 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-kube-api-access-x7hqx" (OuterVolumeSpecName: "kube-api-access-x7hqx") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "kube-api-access-x7hqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.269549 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-ceph" (OuterVolumeSpecName: "ceph") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.269677 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-scripts" (OuterVolumeSpecName: "scripts") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.333570 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.336447 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bj8j\" (UniqueName: \"kubernetes.io/projected/94f5b24a-840b-4206-a190-63cd6339ed70-kube-api-access-8bj8j\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.336623 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-scripts\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.336745 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94f5b24a-840b-4206-a190-63cd6339ed70-ceph\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.336827 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-config-data\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.336910 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337016 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94f5b24a-840b-4206-a190-63cd6339ed70-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337106 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94f5b24a-840b-4206-a190-63cd6339ed70-logs\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " 
pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337234 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337302 4931 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337369 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50e397ef-0630-40db-a591-28d7584dee76-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337442 4931 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-ceph\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337506 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.337576 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7hqx\" (UniqueName: \"kubernetes.io/projected/50e397ef-0630-40db-a591-28d7584dee76-kube-api-access-x7hqx\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.338010 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94f5b24a-840b-4206-a190-63cd6339ed70-logs\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.345972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94f5b24a-840b-4206-a190-63cd6339ed70-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.359556 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-config-data" (OuterVolumeSpecName: "config-data") pod "50e397ef-0630-40db-a591-28d7584dee76" (UID: "50e397ef-0630-40db-a591-28d7584dee76"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.360676 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/94f5b24a-840b-4206-a190-63cd6339ed70-ceph\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.361976 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.362924 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-config-data\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.364037 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94f5b24a-840b-4206-a190-63cd6339ed70-scripts\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.370202 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bj8j\" (UniqueName: \"kubernetes.io/projected/94f5b24a-840b-4206-a190-63cd6339ed70-kube-api-access-8bj8j\") pod \"glance-default-external-api-0\" (UID: \"94f5b24a-840b-4206-a190-63cd6339ed70\") " pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.441994 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50e397ef-0630-40db-a591-28d7584dee76-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.467911 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.954409 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.954402 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50e397ef-0630-40db-a591-28d7584dee76","Type":"ContainerDied","Data":"a16e5756f39cf431b53b36d453d1cc052129da02cb0cac04cc9d6bca4777d6a5"} Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.954824 4931 scope.go:117] "RemoveContainer" containerID="d2e2adf695545abae4fa1879a24ad260b75515f25d086b16166bdfe80e55cc22" Jan 30 06:45:26 crc kubenswrapper[4931]: I0130 06:45:26.991403 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:26.999539 4931 scope.go:117] "RemoveContainer" containerID="32f7ab137db348695ddb60d19f60238d0ad9feb2d7bfb8d7925247bd1ac76d50" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.005361 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.027240 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:45:27 crc kubenswrapper[4931]: E0130 06:45:27.027971 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-httpd" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.028003 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-httpd" Jan 30 06:45:27 crc kubenswrapper[4931]: E0130 06:45:27.028047 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-log" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.028060 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-log" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.028448 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-log" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.028486 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e397ef-0630-40db-a591-28d7584dee76" containerName="glance-httpd" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.029984 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.032916 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.038493 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057357 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057414 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057582 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057609 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhxj\" (UniqueName: \"kubernetes.io/projected/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-kube-api-access-sqhxj\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057629 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057644 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.057692 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.113111 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159127 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159465 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqhxj\" (UniqueName: \"kubernetes.io/projected/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-kube-api-access-sqhxj\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159499 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159522 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159587 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159712 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.159749 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.160351 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.160461 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.165674 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-ceph\") pod \"glance-default-internal-api-0\" (UID: 
\"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.168711 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.169079 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.172396 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.177239 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqhxj\" (UniqueName: \"kubernetes.io/projected/ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf-kube-api-access-sqhxj\") pod \"glance-default-internal-api-0\" (UID: \"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf\") " pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.346734 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.358281 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.438163 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e397ef-0630-40db-a591-28d7584dee76" path="/var/lib/kubelet/pods/50e397ef-0630-40db-a591-28d7584dee76/volumes" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.439849 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab7585b-916e-4a6a-8aa8-da769aaa437e" path="/var/lib/kubelet/pods/7ab7585b-916e-4a6a-8aa8-da769aaa437e/volumes" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.494412 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.494481 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.922809 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.974771 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf","Type":"ContainerStarted","Data":"920850b60e6a3bf1f1b66ac4ba177b4f4f3a4070aebbad1d428058f2829d34ce"} Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.977171 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94f5b24a-840b-4206-a190-63cd6339ed70","Type":"ContainerStarted","Data":"31ba2be01ceb4b04307a9e18ff07ff463cb8f2d42b16f273c5db1453b725fb17"} Jan 30 06:45:27 crc kubenswrapper[4931]: I0130 06:45:27.977220 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94f5b24a-840b-4206-a190-63cd6339ed70","Type":"ContainerStarted","Data":"c9c30a1dee136eec4233191c93835fcc4ffceca99e62438447aa5776af22fd73"} Jan 30 06:45:28 crc kubenswrapper[4931]: I0130 06:45:28.084654 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:28 crc kubenswrapper[4931]: I0130 06:45:28.084714 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:28 crc kubenswrapper[4931]: I0130 06:45:28.994713 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94f5b24a-840b-4206-a190-63cd6339ed70","Type":"ContainerStarted","Data":"b86c394a31d53bfff5771d20ccce9ccb2419535de7783460e0495582b005d673"} Jan 30 06:45:28 crc kubenswrapper[4931]: I0130 06:45:28.998858 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf","Type":"ContainerStarted","Data":"bca631f700ba78e2d22b5877ca51c64be36cfb8db82d0ffd7a84a8c2aa1dba36"} Jan 30 06:45:29 crc kubenswrapper[4931]: I0130 06:45:29.023212 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.023182796 podStartE2EDuration="3.023182796s" podCreationTimestamp="2026-01-30 06:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:45:29.012349253 +0000 UTC m=+5864.382259530" watchObservedRunningTime="2026-01-30 
06:45:29.023182796 +0000 UTC m=+5864.393093093" Jan 30 06:45:29 crc kubenswrapper[4931]: I0130 06:45:29.422334 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:45:29 crc kubenswrapper[4931]: E0130 06:45:29.422929 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:45:30 crc kubenswrapper[4931]: I0130 06:45:30.010498 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf","Type":"ContainerStarted","Data":"f7ec83d39cecd08a7993701455bb9e15718ca83c5aaa26f24f3622dc0ee7ecb9"} Jan 30 06:45:30 crc kubenswrapper[4931]: I0130 06:45:30.040802 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.040777624 podStartE2EDuration="4.040777624s" podCreationTimestamp="2026-01-30 06:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:45:30.029733945 +0000 UTC m=+5865.399644222" watchObservedRunningTime="2026-01-30 06:45:30.040777624 +0000 UTC m=+5865.410687901" Jan 30 06:45:36 crc kubenswrapper[4931]: I0130 06:45:36.469464 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 06:45:36 crc kubenswrapper[4931]: I0130 06:45:36.470236 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 06:45:36 crc kubenswrapper[4931]: I0130 06:45:36.539254 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 06:45:36 crc kubenswrapper[4931]: I0130 06:45:36.562085 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.119284 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.119360 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.359225 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.359493 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.399257 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.445761 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:37 crc kubenswrapper[4931]: I0130 06:45:37.497250 4931 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-86ccfbfc65-5jz59" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 06:45:38 crc kubenswrapper[4931]: I0130 06:45:38.087037 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b8f5df775-m6dvd" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 06:45:38 crc kubenswrapper[4931]: I0130 06:45:38.132677 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:38 crc kubenswrapper[4931]: I0130 06:45:38.133043 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:39 crc kubenswrapper[4931]: I0130 06:45:38.999310 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 06:45:39 crc kubenswrapper[4931]: I0130 06:45:39.143350 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:45:39 crc kubenswrapper[4931]: I0130 06:45:39.403112 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 06:45:40 crc kubenswrapper[4931]: I0130 06:45:40.185908 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:40 crc kubenswrapper[4931]: I0130 06:45:40.186206 4931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 06:45:40 crc kubenswrapper[4931]: I0130 06:45:40.307664 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 06:45:42 crc kubenswrapper[4931]: I0130 06:45:42.424893 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:45:42 crc kubenswrapper[4931]: E0130 06:45:42.427209 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:45:46 crc kubenswrapper[4931]: I0130 06:45:46.064601 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ee04-account-create-update-5mxt8"] Jan 30 06:45:46 crc kubenswrapper[4931]: I0130 06:45:46.083533 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w8ln6"] Jan 30 06:45:46 crc kubenswrapper[4931]: I0130 06:45:46.094148 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ee04-account-create-update-5mxt8"] Jan 30 06:45:46 crc kubenswrapper[4931]: I0130 06:45:46.103477 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w8ln6"] Jan 30 06:45:47 crc kubenswrapper[4931]: I0130 06:45:47.443656 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259088b5-f22c-4773-a526-5ce0d618a3c9" 
path="/var/lib/kubelet/pods/259088b5-f22c-4773-a526-5ce0d618a3c9/volumes" Jan 30 06:45:47 crc kubenswrapper[4931]: I0130 06:45:47.446982 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da3ee3e2-1067-4d91-8780-4ee1442ddccd" path="/var/lib/kubelet/pods/da3ee3e2-1067-4d91-8780-4ee1442ddccd/volumes" Jan 30 06:45:49 crc kubenswrapper[4931]: I0130 06:45:49.285208 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:49 crc kubenswrapper[4931]: I0130 06:45:49.852058 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:51 crc kubenswrapper[4931]: I0130 06:45:51.061921 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:45:51 crc kubenswrapper[4931]: I0130 06:45:51.569580 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:45:51 crc kubenswrapper[4931]: I0130 06:45:51.643711 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86ccfbfc65-5jz59"] Jan 30 06:45:51 crc kubenswrapper[4931]: I0130 06:45:51.644080 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86ccfbfc65-5jz59" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon-log" containerID="cri-o://72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0" gracePeriod=30 Jan 30 06:45:51 crc kubenswrapper[4931]: I0130 06:45:51.644217 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86ccfbfc65-5jz59" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" containerID="cri-o://700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111" gracePeriod=30 Jan 30 06:45:53 crc kubenswrapper[4931]: I0130 06:45:53.422936 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:45:53 crc kubenswrapper[4931]: E0130 06:45:53.423747 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:45:55 crc kubenswrapper[4931]: I0130 06:45:55.056922 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vxl4f"] Jan 30 06:45:55 crc kubenswrapper[4931]: I0130 06:45:55.075050 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vxl4f"] Jan 30 06:45:55 crc kubenswrapper[4931]: I0130 06:45:55.345311 4931 generic.go:334] "Generic (PLEG): container finished" podID="e180809a-b692-42c0-b821-723afe805954" containerID="700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111" exitCode=0 Jan 30 06:45:55 crc kubenswrapper[4931]: I0130 06:45:55.345376 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ccfbfc65-5jz59" event={"ID":"e180809a-b692-42c0-b821-723afe805954","Type":"ContainerDied","Data":"700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111"} Jan 30 06:45:55 crc kubenswrapper[4931]: I0130 06:45:55.449562 4931 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="a2c67196-2e21-4ca1-81c6-ae1d0b68d461" path="/var/lib/kubelet/pods/a2c67196-2e21-4ca1-81c6-ae1d0b68d461/volumes" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.361619 4931 generic.go:334] "Generic (PLEG): container finished" podID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerID="0ac695e770aa1e4dca8f308d0921c4323577d50a3d4f87168b23f7117ef54020" exitCode=137 Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.362252 4931 generic.go:334] "Generic (PLEG): container finished" podID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerID="3f69e23981e22c16e2b3eb960775aef58a246bf4d2a400c78b43e6f979492f62" exitCode=137 Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.362289 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd79f5877-c4n2v" event={"ID":"00dba7a3-7492-4010-9931-1ed387dc22a7","Type":"ContainerDied","Data":"0ac695e770aa1e4dca8f308d0921c4323577d50a3d4f87168b23f7117ef54020"} Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.362328 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd79f5877-c4n2v" event={"ID":"00dba7a3-7492-4010-9931-1ed387dc22a7","Type":"ContainerDied","Data":"3f69e23981e22c16e2b3eb960775aef58a246bf4d2a400c78b43e6f979492f62"} Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.483850 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.657349 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00dba7a3-7492-4010-9931-1ed387dc22a7-logs\") pod \"00dba7a3-7492-4010-9931-1ed387dc22a7\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.657403 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00dba7a3-7492-4010-9931-1ed387dc22a7-horizon-secret-key\") pod \"00dba7a3-7492-4010-9931-1ed387dc22a7\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.657461 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-config-data\") pod \"00dba7a3-7492-4010-9931-1ed387dc22a7\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.657622 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/00dba7a3-7492-4010-9931-1ed387dc22a7-kube-api-access-l5pmn\") pod \"00dba7a3-7492-4010-9931-1ed387dc22a7\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.657706 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-scripts\") pod \"00dba7a3-7492-4010-9931-1ed387dc22a7\" (UID: \"00dba7a3-7492-4010-9931-1ed387dc22a7\") " Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.658583 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00dba7a3-7492-4010-9931-1ed387dc22a7-logs" (OuterVolumeSpecName: "logs") pod "00dba7a3-7492-4010-9931-1ed387dc22a7" (UID: "00dba7a3-7492-4010-9931-1ed387dc22a7"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.669629 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00dba7a3-7492-4010-9931-1ed387dc22a7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "00dba7a3-7492-4010-9931-1ed387dc22a7" (UID: "00dba7a3-7492-4010-9931-1ed387dc22a7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.671711 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00dba7a3-7492-4010-9931-1ed387dc22a7-kube-api-access-l5pmn" (OuterVolumeSpecName: "kube-api-access-l5pmn") pod "00dba7a3-7492-4010-9931-1ed387dc22a7" (UID: "00dba7a3-7492-4010-9931-1ed387dc22a7"). InnerVolumeSpecName "kube-api-access-l5pmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.687274 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-config-data" (OuterVolumeSpecName: "config-data") pod "00dba7a3-7492-4010-9931-1ed387dc22a7" (UID: "00dba7a3-7492-4010-9931-1ed387dc22a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.706953 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-scripts" (OuterVolumeSpecName: "scripts") pod "00dba7a3-7492-4010-9931-1ed387dc22a7" (UID: "00dba7a3-7492-4010-9931-1ed387dc22a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.760101 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5pmn\" (UniqueName: \"kubernetes.io/projected/00dba7a3-7492-4010-9931-1ed387dc22a7-kube-api-access-l5pmn\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.760134 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.760143 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00dba7a3-7492-4010-9931-1ed387dc22a7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.760580 4931 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/00dba7a3-7492-4010-9931-1ed387dc22a7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:56 crc kubenswrapper[4931]: I0130 06:45:56.760592 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/00dba7a3-7492-4010-9931-1ed387dc22a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.401276 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd79f5877-c4n2v" event={"ID":"00dba7a3-7492-4010-9931-1ed387dc22a7","Type":"ContainerDied","Data":"b88b4f7f43f890396ef5c3306741e800af49790969d1f953fb512e6e66e5766b"} Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.402181 4931 scope.go:117] "RemoveContainer" containerID="0ac695e770aa1e4dca8f308d0921c4323577d50a3d4f87168b23f7117ef54020" Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.401406 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fd79f5877-c4n2v" Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.478120 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fd79f5877-c4n2v"] Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.492023 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fd79f5877-c4n2v"] Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.494237 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-86ccfbfc65-5jz59" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 06:45:57 crc kubenswrapper[4931]: I0130 06:45:57.652247 4931 scope.go:117] "RemoveContainer" containerID="3f69e23981e22c16e2b3eb960775aef58a246bf4d2a400c78b43e6f979492f62" Jan 30 06:45:59 crc kubenswrapper[4931]: I0130 06:45:59.444891 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" path="/var/lib/kubelet/pods/00dba7a3-7492-4010-9931-1ed387dc22a7/volumes" Jan 30 06:46:00 crc kubenswrapper[4931]: I0130 06:46:00.829483 4931 scope.go:117] "RemoveContainer" containerID="18c71eb1241272ed04cbfce337c51a3320bfd0991c28ac36edc8dd0665668963" Jan 30 06:46:00 crc kubenswrapper[4931]: I0130 06:46:00.864696 4931 scope.go:117] "RemoveContainer" containerID="d12d2fc2afba982df405694f131f75a0e5433ce67ce82580ab99cf0746dfdbc2" Jan 30 06:46:00 crc kubenswrapper[4931]: I0130 06:46:00.906930 4931 scope.go:117] "RemoveContainer" containerID="2a367b7f63781dff8719e328044d9f7bfe39229339b2c9fd8828dc6b757b0a29" Jan 30 06:46:00 crc kubenswrapper[4931]: I0130 06:46:00.956595 4931 scope.go:117] "RemoveContainer" containerID="8641f9d89c670b316ae569c652c473fa47969340118c8804760552f9529867f0" Jan 30 06:46:00 crc kubenswrapper[4931]: I0130 06:46:00.994355 4931 scope.go:117] "RemoveContainer" containerID="c1b6dac81e48bbee78ee0997dd94e0a4ebd87490b839272e84bc72df445ca206" Jan 30 06:46:01 crc kubenswrapper[4931]: I0130 06:46:01.039778 4931 scope.go:117] "RemoveContainer" containerID="328e8eda0559ed6f531366255d38e56b8621e4607eb9add0633123842cfdda68" Jan 30 06:46:01 crc kubenswrapper[4931]: I0130 06:46:01.069357 4931 scope.go:117] "RemoveContainer" containerID="a418c4ea4e534161dc6a2d3882bfda776aacf77311c0dc409345881787f7574b" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.129081 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d459c77c7-fncxw"] Jan 30 06:46:05 crc kubenswrapper[4931]: E0130 06:46:05.130272 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.130290 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon" Jan 30 06:46:05 crc kubenswrapper[4931]: E0130 06:46:05.130322 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon-log" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.130333 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon-log" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.130609 4931 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon-log" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.130632 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="00dba7a3-7492-4010-9931-1ed387dc22a7" containerName="horizon" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.131904 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.147519 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d459c77c7-fncxw"] Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.257086 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-horizon-secret-key\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.257161 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjvc\" (UniqueName: \"kubernetes.io/projected/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-kube-api-access-qpjvc\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.257190 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-scripts\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.257261 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-config-data\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.257327 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-logs\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.359072 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-horizon-secret-key\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.359159 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjvc\" (UniqueName: \"kubernetes.io/projected/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-kube-api-access-qpjvc\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.359186 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-scripts\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.359235 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-config-data\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.359300 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-logs\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.359807 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-logs\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.360545 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-scripts\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.361345 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-config-data\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.368037 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-horizon-secret-key\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.381773 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjvc\" (UniqueName: \"kubernetes.io/projected/4254a5c6-88bc-4b8f-a425-79d9bea9eb6d-kube-api-access-qpjvc\") pod \"horizon-5d459c77c7-fncxw\" (UID: \"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d\") " pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.431098 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:46:05 crc kubenswrapper[4931]: E0130 06:46:05.431534 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.468718 4931 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:05 crc kubenswrapper[4931]: I0130 06:46:05.963946 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d459c77c7-fncxw"] Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.513614 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d459c77c7-fncxw" event={"ID":"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d","Type":"ContainerStarted","Data":"dae790fec84756ec577c5580230a30addbc905f61e2ddb53534192cbef4e266b"} Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.513939 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d459c77c7-fncxw" event={"ID":"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d","Type":"ContainerStarted","Data":"cec5e276d2533b840d8931e0ff453f085ac7186820b7dbf22e9c5b98a875574e"} Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.513955 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d459c77c7-fncxw" event={"ID":"4254a5c6-88bc-4b8f-a425-79d9bea9eb6d","Type":"ContainerStarted","Data":"e5ef97d6cee0490ed3dc2c6a36021e37dbe2a20b3c7fd83e92391cdd18ddab81"} Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.538129 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5d459c77c7-fncxw" podStartSLOduration=1.538103813 podStartE2EDuration="1.538103813s" podCreationTimestamp="2026-01-30 06:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:06.53228085 +0000 UTC m=+5901.902191107" watchObservedRunningTime="2026-01-30 06:46:06.538103813 +0000 UTC m=+5901.908014080" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.672273 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-md7t7"] Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.674008 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.680058 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-md7t7"] Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.794320 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97hj\" (UniqueName: \"kubernetes.io/projected/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-kube-api-access-t97hj\") pod \"heat-db-create-md7t7\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.794710 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-operator-scripts\") pod \"heat-db-create-md7t7\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.876257 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-0207-account-create-update-nwwgb"] Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.881467 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.884934 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.895646 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-0207-account-create-update-nwwgb"] Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.896487 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97hj\" (UniqueName: \"kubernetes.io/projected/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-kube-api-access-t97hj\") pod \"heat-db-create-md7t7\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.896550 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-operator-scripts\") pod \"heat-db-create-md7t7\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.897459 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-operator-scripts\") pod \"heat-db-create-md7t7\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.918977 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97hj\" (UniqueName: \"kubernetes.io/projected/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-kube-api-access-t97hj\") pod \"heat-db-create-md7t7\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " pod="openstack/heat-db-create-md7t7" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.998778 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbldw\" (UniqueName: \"kubernetes.io/projected/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-kube-api-access-wbldw\") pod \"heat-0207-account-create-update-nwwgb\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:06 crc kubenswrapper[4931]: I0130 06:46:06.999122 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-operator-scripts\") pod \"heat-0207-account-create-update-nwwgb\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.028472 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-md7t7" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.101410 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbldw\" (UniqueName: \"kubernetes.io/projected/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-kube-api-access-wbldw\") pod \"heat-0207-account-create-update-nwwgb\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.101575 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-operator-scripts\") pod \"heat-0207-account-create-update-nwwgb\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.102560 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-operator-scripts\") pod \"heat-0207-account-create-update-nwwgb\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.123525 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbldw\" (UniqueName: \"kubernetes.io/projected/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-kube-api-access-wbldw\") pod \"heat-0207-account-create-update-nwwgb\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.202486 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.494490 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-86ccfbfc65-5jz59" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.538495 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-md7t7"] Jan 30 06:46:07 crc kubenswrapper[4931]: W0130 06:46:07.541620 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65b44b5a_7476_44a4_b7ca_e6c246e9afdc.slice/crio-3d55ce172c678f76fc2348dc3f38851f5e702decefa7e63fc458f4461ec77231 WatchSource:0}: Error finding container 3d55ce172c678f76fc2348dc3f38851f5e702decefa7e63fc458f4461ec77231: Status 404 returned error can't find the container with id 3d55ce172c678f76fc2348dc3f38851f5e702decefa7e63fc458f4461ec77231 Jan 30 06:46:07 crc kubenswrapper[4931]: I0130 06:46:07.699404 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-0207-account-create-update-nwwgb"] Jan 30 06:46:08 crc kubenswrapper[4931]: I0130 06:46:08.535675 4931 generic.go:334] "Generic (PLEG): container finished" podID="4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" containerID="0c6e2269ccd94b91b1bc61c0d6038a0f312e1ff979dd42b9e25c99edd027ce3a" exitCode=0 Jan 30 06:46:08 crc kubenswrapper[4931]: I0130 06:46:08.536384 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0207-account-create-update-nwwgb" event={"ID":"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7","Type":"ContainerDied","Data":"0c6e2269ccd94b91b1bc61c0d6038a0f312e1ff979dd42b9e25c99edd027ce3a"} Jan 30 06:46:08 crc kubenswrapper[4931]: I0130 06:46:08.536442 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0207-account-create-update-nwwgb" event={"ID":"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7","Type":"ContainerStarted","Data":"7759660487bc155207cd5eb26c0d9fbddda7ab8c7b320ed171716e13152c14b8"} Jan 30 06:46:08 crc kubenswrapper[4931]: I0130 06:46:08.540162 4931 generic.go:334] "Generic (PLEG): container finished" podID="65b44b5a-7476-44a4-b7ca-e6c246e9afdc" containerID="461da2cabb65077a09c290e33233aae28ff5843458cdbe68b5fe17f6c78dd05f" exitCode=0 Jan 30 06:46:08 crc kubenswrapper[4931]: I0130 06:46:08.540208 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-md7t7" event={"ID":"65b44b5a-7476-44a4-b7ca-e6c246e9afdc","Type":"ContainerDied","Data":"461da2cabb65077a09c290e33233aae28ff5843458cdbe68b5fe17f6c78dd05f"} Jan 30 06:46:08 crc kubenswrapper[4931]: I0130 06:46:08.540238 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-md7t7" event={"ID":"65b44b5a-7476-44a4-b7ca-e6c246e9afdc","Type":"ContainerStarted","Data":"3d55ce172c678f76fc2348dc3f38851f5e702decefa7e63fc458f4461ec77231"} Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.071530 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.099955 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-md7t7" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.175165 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-operator-scripts\") pod \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.175262 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97hj\" (UniqueName: \"kubernetes.io/projected/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-kube-api-access-t97hj\") pod \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\" (UID: \"65b44b5a-7476-44a4-b7ca-e6c246e9afdc\") " Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.175349 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbldw\" (UniqueName: \"kubernetes.io/projected/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-kube-api-access-wbldw\") pod \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.175394 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-operator-scripts\") pod \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\" (UID: \"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7\") " Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.176384 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65b44b5a-7476-44a4-b7ca-e6c246e9afdc" (UID: "65b44b5a-7476-44a4-b7ca-e6c246e9afdc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.176413 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" (UID: "4110f6ea-5daa-4a1f-8fc2-f9497b7024f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.183016 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-kube-api-access-wbldw" (OuterVolumeSpecName: "kube-api-access-wbldw") pod "4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" (UID: "4110f6ea-5daa-4a1f-8fc2-f9497b7024f7"). InnerVolumeSpecName "kube-api-access-wbldw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.183604 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-kube-api-access-t97hj" (OuterVolumeSpecName: "kube-api-access-t97hj") pod "65b44b5a-7476-44a4-b7ca-e6c246e9afdc" (UID: "65b44b5a-7476-44a4-b7ca-e6c246e9afdc"). InnerVolumeSpecName "kube-api-access-t97hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.277857 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.277900 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t97hj\" (UniqueName: \"kubernetes.io/projected/65b44b5a-7476-44a4-b7ca-e6c246e9afdc-kube-api-access-t97hj\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.277915 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbldw\" (UniqueName: \"kubernetes.io/projected/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-kube-api-access-wbldw\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.277926 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.562847 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-0207-account-create-update-nwwgb" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.562868 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-0207-account-create-update-nwwgb" event={"ID":"4110f6ea-5daa-4a1f-8fc2-f9497b7024f7","Type":"ContainerDied","Data":"7759660487bc155207cd5eb26c0d9fbddda7ab8c7b320ed171716e13152c14b8"} Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.562911 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7759660487bc155207cd5eb26c0d9fbddda7ab8c7b320ed171716e13152c14b8" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.564542 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-md7t7" event={"ID":"65b44b5a-7476-44a4-b7ca-e6c246e9afdc","Type":"ContainerDied","Data":"3d55ce172c678f76fc2348dc3f38851f5e702decefa7e63fc458f4461ec77231"} Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.564569 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d55ce172c678f76fc2348dc3f38851f5e702decefa7e63fc458f4461ec77231" Jan 30 06:46:10 crc kubenswrapper[4931]: I0130 06:46:10.564820 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-md7t7" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.035640 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-t75hv"] Jan 30 06:46:12 crc kubenswrapper[4931]: E0130 06:46:12.036484 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" containerName="mariadb-account-create-update" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.036504 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" containerName="mariadb-account-create-update" Jan 30 06:46:12 crc kubenswrapper[4931]: E0130 06:46:12.036525 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b44b5a-7476-44a4-b7ca-e6c246e9afdc" containerName="mariadb-database-create" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.036557 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b44b5a-7476-44a4-b7ca-e6c246e9afdc" containerName="mariadb-database-create" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.036927 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" containerName="mariadb-account-create-update" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.036965 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b44b5a-7476-44a4-b7ca-e6c246e9afdc" containerName="mariadb-database-create" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.037956 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.044179 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.044401 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rpnp9" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.054825 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-t75hv"] Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.120260 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5rzw\" (UniqueName: \"kubernetes.io/projected/eae9c157-1120-45ac-8d6c-cc417f364b1f-kube-api-access-p5rzw\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.120852 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-combined-ca-bundle\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.120895 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-config-data\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.228259 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rzw\" (UniqueName: 
\"kubernetes.io/projected/eae9c157-1120-45ac-8d6c-cc417f364b1f-kube-api-access-p5rzw\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.228345 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-combined-ca-bundle\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.228381 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-config-data\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.234926 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-config-data\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.237689 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-combined-ca-bundle\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.247601 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5rzw\" (UniqueName: \"kubernetes.io/projected/eae9c157-1120-45ac-8d6c-cc417f364b1f-kube-api-access-p5rzw\") pod \"heat-db-sync-t75hv\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.379227 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:12 crc kubenswrapper[4931]: I0130 06:46:12.876597 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-t75hv"] Jan 30 06:46:13 crc kubenswrapper[4931]: I0130 06:46:13.608244 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t75hv" event={"ID":"eae9c157-1120-45ac-8d6c-cc417f364b1f","Type":"ContainerStarted","Data":"ec88771f0c85057efcfc2cbe78e40d1ed716715f7c536bdc884c1a61abdb4693"} Jan 30 06:46:15 crc kubenswrapper[4931]: I0130 06:46:15.469869 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:15 crc kubenswrapper[4931]: I0130 06:46:15.470217 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:16 crc kubenswrapper[4931]: I0130 06:46:16.422286 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:46:16 crc kubenswrapper[4931]: E0130 06:46:16.423065 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:46:17 crc kubenswrapper[4931]: I0130 06:46:17.493788 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-86ccfbfc65-5jz59" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 30 06:46:17 crc kubenswrapper[4931]: I0130 06:46:17.494217 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:46:19 crc kubenswrapper[4931]: I0130 06:46:19.688065 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t75hv" event={"ID":"eae9c157-1120-45ac-8d6c-cc417f364b1f","Type":"ContainerStarted","Data":"21f9a5808ddea5a133a2d53b441ae213a916b33040640a7576f4d7163df3f19d"} Jan 30 06:46:19 crc kubenswrapper[4931]: I0130 06:46:19.719765 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-t75hv" podStartSLOduration=1.5889306159999999 podStartE2EDuration="7.719745123s" podCreationTimestamp="2026-01-30 06:46:12 +0000 UTC" firstStartedPulling="2026-01-30 06:46:12.880099797 +0000 UTC m=+5908.250010094" lastFinishedPulling="2026-01-30 06:46:19.010914304 +0000 UTC m=+5914.380824601" observedRunningTime="2026-01-30 06:46:19.712946783 +0000 UTC m=+5915.082857070" watchObservedRunningTime="2026-01-30 06:46:19.719745123 +0000 UTC m=+5915.089655400" Jan 30 06:46:21 crc kubenswrapper[4931]: I0130 06:46:21.713179 4931 generic.go:334] "Generic (PLEG): container finished" podID="eae9c157-1120-45ac-8d6c-cc417f364b1f" containerID="21f9a5808ddea5a133a2d53b441ae213a916b33040640a7576f4d7163df3f19d" exitCode=0 Jan 30 06:46:21 crc kubenswrapper[4931]: I0130 06:46:21.713221 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t75hv" 
event={"ID":"eae9c157-1120-45ac-8d6c-cc417f364b1f","Type":"ContainerDied","Data":"21f9a5808ddea5a133a2d53b441ae213a916b33040640a7576f4d7163df3f19d"} Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.027488 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.046483 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e180809a-b692-42c0-b821-723afe805954-horizon-secret-key\") pod \"e180809a-b692-42c0-b821-723afe805954\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.046957 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-scripts\") pod \"e180809a-b692-42c0-b821-723afe805954\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.047067 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-config-data\") pod \"e180809a-b692-42c0-b821-723afe805954\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.047136 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e180809a-b692-42c0-b821-723afe805954-logs\") pod \"e180809a-b692-42c0-b821-723afe805954\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.047181 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hdtv\" (UniqueName: \"kubernetes.io/projected/e180809a-b692-42c0-b821-723afe805954-kube-api-access-7hdtv\") pod \"e180809a-b692-42c0-b821-723afe805954\" (UID: \"e180809a-b692-42c0-b821-723afe805954\") " Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.051531 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e180809a-b692-42c0-b821-723afe805954-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e180809a-b692-42c0-b821-723afe805954" (UID: "e180809a-b692-42c0-b821-723afe805954"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.051905 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e180809a-b692-42c0-b821-723afe805954-logs" (OuterVolumeSpecName: "logs") pod "e180809a-b692-42c0-b821-723afe805954" (UID: "e180809a-b692-42c0-b821-723afe805954"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.056218 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e180809a-b692-42c0-b821-723afe805954-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.056587 4931 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e180809a-b692-42c0-b821-723afe805954-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.109195 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e180809a-b692-42c0-b821-723afe805954-kube-api-access-7hdtv" (OuterVolumeSpecName: "kube-api-access-7hdtv") pod "e180809a-b692-42c0-b821-723afe805954" (UID: "e180809a-b692-42c0-b821-723afe805954"). InnerVolumeSpecName "kube-api-access-7hdtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.110160 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-scripts" (OuterVolumeSpecName: "scripts") pod "e180809a-b692-42c0-b821-723afe805954" (UID: "e180809a-b692-42c0-b821-723afe805954"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.112487 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-config-data" (OuterVolumeSpecName: "config-data") pod "e180809a-b692-42c0-b821-723afe805954" (UID: "e180809a-b692-42c0-b821-723afe805954"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.158468 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.158520 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hdtv\" (UniqueName: \"kubernetes.io/projected/e180809a-b692-42c0-b821-723afe805954-kube-api-access-7hdtv\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.158539 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e180809a-b692-42c0-b821-723afe805954-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.733946 4931 generic.go:334] "Generic (PLEG): container finished" podID="e180809a-b692-42c0-b821-723afe805954" containerID="72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0" exitCode=137 Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.734016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ccfbfc65-5jz59" event={"ID":"e180809a-b692-42c0-b821-723afe805954","Type":"ContainerDied","Data":"72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0"} Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.734050 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86ccfbfc65-5jz59" event={"ID":"e180809a-b692-42c0-b821-723afe805954","Type":"ContainerDied","Data":"6e7fd796d4cb0d311b3d0a188a2af32435b8c86d4ea09b091cca87d30bbf2b5d"} Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.734069 4931 scope.go:117] "RemoveContainer" containerID="700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.733993 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86ccfbfc65-5jz59" Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.807976 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86ccfbfc65-5jz59"] Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.817905 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86ccfbfc65-5jz59"] Jan 30 06:46:22 crc kubenswrapper[4931]: I0130 06:46:22.985032 4931 scope.go:117] "RemoveContainer" containerID="72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.010962 4931 scope.go:117] "RemoveContainer" containerID="700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111" Jan 30 06:46:23 crc kubenswrapper[4931]: E0130 06:46:23.011702 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111\": container with ID starting with 700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111 not found: ID does not exist" containerID="700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.011733 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111"} err="failed to get container status \"700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111\": rpc error: code = NotFound desc = could not find container \"700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111\": container with ID starting with 700676473cd61ebacb9e15a0a971ab42c8f08fbaa84f601ba058ff31fbcd8111 not found: ID does not exist" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.011782 4931 scope.go:117] "RemoveContainer" containerID="72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0" Jan 30 06:46:23 crc kubenswrapper[4931]: E0130 06:46:23.012199 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0\": container with ID starting with 72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0 not found: ID does not exist" containerID="72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.012231 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0"} err="failed to get container status \"72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0\": rpc error: code = NotFound desc = could not find container \"72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0\": container with ID starting with 72e47d4ade77f5ded5f7a763729bf9cc9f93500e98e8c475f9d41f7b996918d0 not found: ID does not exist" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.107267 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.287469 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-combined-ca-bundle\") pod \"eae9c157-1120-45ac-8d6c-cc417f364b1f\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.287564 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5rzw\" (UniqueName: \"kubernetes.io/projected/eae9c157-1120-45ac-8d6c-cc417f364b1f-kube-api-access-p5rzw\") pod \"eae9c157-1120-45ac-8d6c-cc417f364b1f\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.287776 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-config-data\") pod \"eae9c157-1120-45ac-8d6c-cc417f364b1f\" (UID: \"eae9c157-1120-45ac-8d6c-cc417f364b1f\") " Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.294875 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae9c157-1120-45ac-8d6c-cc417f364b1f-kube-api-access-p5rzw" (OuterVolumeSpecName: "kube-api-access-p5rzw") pod "eae9c157-1120-45ac-8d6c-cc417f364b1f" (UID: "eae9c157-1120-45ac-8d6c-cc417f364b1f"). InnerVolumeSpecName "kube-api-access-p5rzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.345389 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eae9c157-1120-45ac-8d6c-cc417f364b1f" (UID: "eae9c157-1120-45ac-8d6c-cc417f364b1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.391652 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5rzw\" (UniqueName: \"kubernetes.io/projected/eae9c157-1120-45ac-8d6c-cc417f364b1f-kube-api-access-p5rzw\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.392106 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.409155 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-config-data" (OuterVolumeSpecName: "config-data") pod "eae9c157-1120-45ac-8d6c-cc417f364b1f" (UID: "eae9c157-1120-45ac-8d6c-cc417f364b1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.440780 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e180809a-b692-42c0-b821-723afe805954" path="/var/lib/kubelet/pods/e180809a-b692-42c0-b821-723afe805954/volumes" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.494077 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eae9c157-1120-45ac-8d6c-cc417f364b1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.745286 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t75hv" event={"ID":"eae9c157-1120-45ac-8d6c-cc417f364b1f","Type":"ContainerDied","Data":"ec88771f0c85057efcfc2cbe78e40d1ed716715f7c536bdc884c1a61abdb4693"} Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.745329 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec88771f0c85057efcfc2cbe78e40d1ed716715f7c536bdc884c1a61abdb4693" Jan 30 06:46:23 crc kubenswrapper[4931]: I0130 06:46:23.745386 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t75hv" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.025592 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d6f44c564-6wts7"] Jan 30 06:46:25 crc kubenswrapper[4931]: E0130 06:46:25.026550 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.026588 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" Jan 30 06:46:25 crc kubenswrapper[4931]: E0130 06:46:25.026620 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae9c157-1120-45ac-8d6c-cc417f364b1f" containerName="heat-db-sync" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.026631 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae9c157-1120-45ac-8d6c-cc417f364b1f" containerName="heat-db-sync" Jan 30 06:46:25 crc kubenswrapper[4931]: E0130 06:46:25.026652 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon-log" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.026664 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon-log" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.026988 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.027019 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae9c157-1120-45ac-8d6c-cc417f364b1f" containerName="heat-db-sync" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.027054 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="e180809a-b692-42c0-b821-723afe805954" containerName="horizon-log" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.028138 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.032291 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.032760 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.044392 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rpnp9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.064847 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d6f44c564-6wts7"] Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.128857 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-config-data-custom\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.128945 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrq8\" (UniqueName: \"kubernetes.io/projected/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-kube-api-access-sbrq8\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.128976 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-config-data\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.129073 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-combined-ca-bundle\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.231533 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-config-data\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.231676 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-combined-ca-bundle\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.231717 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-config-data-custom\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 
06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.231794 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrq8\" (UniqueName: \"kubernetes.io/projected/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-kube-api-access-sbrq8\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.240559 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-config-data\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.245255 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-combined-ca-bundle\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.249972 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-config-data-custom\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.251170 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrq8\" (UniqueName: \"kubernetes.io/projected/78cdbc3b-0ff9-4204-b62e-bc784e3fcb87-kube-api-access-sbrq8\") pod \"heat-engine-6d6f44c564-6wts7\" (UID: \"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87\") " pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.321121 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-795f886c68-gphf9"] Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.322358 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.329329 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.337085 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7944f98bdf-sfnzs"] Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.338245 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.341718 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.365498 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.376076 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7944f98bdf-sfnzs"] Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.385218 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-795f886c68-gphf9"] Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438081 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-config-data\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438135 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-config-data-custom\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438155 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td62v\" (UniqueName: \"kubernetes.io/projected/7094dd36-79d9-4c63-9441-1753815af4a7-kube-api-access-td62v\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438304 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5gmh\" (UniqueName: \"kubernetes.io/projected/e3a9064f-a3e2-4734-8b77-9e42deff080a-kube-api-access-p5gmh\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438388 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-config-data\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438410 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-combined-ca-bundle\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438527 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-config-data-custom\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.438613 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-combined-ca-bundle\") pod 
\"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.541895 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-config-data\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542011 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-combined-ca-bundle\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542262 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-config-data-custom\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542388 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-combined-ca-bundle\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542599 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-config-data\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542677 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-config-data-custom\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542719 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td62v\" (UniqueName: \"kubernetes.io/projected/7094dd36-79d9-4c63-9441-1753815af4a7-kube-api-access-td62v\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.542802 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5gmh\" (UniqueName: \"kubernetes.io/projected/e3a9064f-a3e2-4734-8b77-9e42deff080a-kube-api-access-p5gmh\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.555560 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-config-data\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: 
\"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.563881 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-config-data-custom\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.569351 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-config-data\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.570279 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-config-data-custom\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.574608 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7094dd36-79d9-4c63-9441-1753815af4a7-combined-ca-bundle\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.581225 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5gmh\" (UniqueName: \"kubernetes.io/projected/e3a9064f-a3e2-4734-8b77-9e42deff080a-kube-api-access-p5gmh\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.583868 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a9064f-a3e2-4734-8b77-9e42deff080a-combined-ca-bundle\") pod \"heat-api-795f886c68-gphf9\" (UID: \"e3a9064f-a3e2-4734-8b77-9e42deff080a\") " pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.585143 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td62v\" (UniqueName: \"kubernetes.io/projected/7094dd36-79d9-4c63-9441-1753815af4a7-kube-api-access-td62v\") pod \"heat-cfnapi-7944f98bdf-sfnzs\" (UID: \"7094dd36-79d9-4c63-9441-1753815af4a7\") " pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.655870 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.703809 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:25 crc kubenswrapper[4931]: I0130 06:46:25.921051 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d6f44c564-6wts7"] Jan 30 06:46:25 crc kubenswrapper[4931]: W0130 06:46:25.926524 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78cdbc3b_0ff9_4204_b62e_bc784e3fcb87.slice/crio-f834194230f523ed8c30e4b0b48089f146097cc3b4f36bfbae6c0dcffaecca47 WatchSource:0}: Error finding container f834194230f523ed8c30e4b0b48089f146097cc3b4f36bfbae6c0dcffaecca47: Status 404 returned error can't find the container with id f834194230f523ed8c30e4b0b48089f146097cc3b4f36bfbae6c0dcffaecca47 Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.110730 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-795f886c68-gphf9"] Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.250193 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7944f98bdf-sfnzs"] Jan 30 06:46:26 crc kubenswrapper[4931]: W0130 06:46:26.251221 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7094dd36_79d9_4c63_9441_1753815af4a7.slice/crio-3b56b4340ea6265de44572bd720f7b92a4cfe2b0d6f5d3e3e70e9928348f9997 WatchSource:0}: Error finding container 3b56b4340ea6265de44572bd720f7b92a4cfe2b0d6f5d3e3e70e9928348f9997: Status 404 returned error can't find the container with id 3b56b4340ea6265de44572bd720f7b92a4cfe2b0d6f5d3e3e70e9928348f9997 Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.819696 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" event={"ID":"7094dd36-79d9-4c63-9441-1753815af4a7","Type":"ContainerStarted","Data":"3b56b4340ea6265de44572bd720f7b92a4cfe2b0d6f5d3e3e70e9928348f9997"} Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.822958 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d6f44c564-6wts7" event={"ID":"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87","Type":"ContainerStarted","Data":"c10d223cc51544d9fecfc8f7ac0fce0fb40f94528d9ce0fb8a1ef18e35c5cb6d"} Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.823016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d6f44c564-6wts7" event={"ID":"78cdbc3b-0ff9-4204-b62e-bc784e3fcb87","Type":"ContainerStarted","Data":"f834194230f523ed8c30e4b0b48089f146097cc3b4f36bfbae6c0dcffaecca47"} Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.823265 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.825083 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-795f886c68-gphf9" event={"ID":"e3a9064f-a3e2-4734-8b77-9e42deff080a","Type":"ContainerStarted","Data":"cdb4f4fff3ef66eb047fc0fad5ba4ec59c5df1bd897863a7040dd254520fe016"} Jan 30 06:46:26 crc kubenswrapper[4931]: I0130 06:46:26.840071 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6d6f44c564-6wts7" podStartSLOduration=2.8400480310000003 podStartE2EDuration="2.840048031s" podCreationTimestamp="2026-01-30 06:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:46:26.836953684 +0000 UTC 
m=+5922.206863951" watchObservedRunningTime="2026-01-30 06:46:26.840048031 +0000 UTC m=+5922.209958288" Jan 30 06:46:27 crc kubenswrapper[4931]: I0130 06:46:27.383724 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.103812 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5d459c77c7-fncxw" Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.162401 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b8f5df775-m6dvd"] Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.162753 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b8f5df775-m6dvd" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon-log" containerID="cri-o://16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb" gracePeriod=30 Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.162895 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b8f5df775-m6dvd" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" containerID="cri-o://85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88" gracePeriod=30 Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.853230 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" event={"ID":"7094dd36-79d9-4c63-9441-1753815af4a7","Type":"ContainerStarted","Data":"a8b2f0d8514937b1d4f06d703fc47131681d783d24f8dfbebd8154ecbfd67779"} Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.853889 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.855583 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-795f886c68-gphf9" event={"ID":"e3a9064f-a3e2-4734-8b77-9e42deff080a","Type":"ContainerStarted","Data":"61ced90ca96268bddf1abf323519f365962314a7c94828080c8fa1faf48a1a8a"} Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.856111 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.879078 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" podStartSLOduration=2.221850973 podStartE2EDuration="4.879058611s" podCreationTimestamp="2026-01-30 06:46:25 +0000 UTC" firstStartedPulling="2026-01-30 06:46:26.253880923 +0000 UTC m=+5921.623791180" lastFinishedPulling="2026-01-30 06:46:28.911088561 +0000 UTC m=+5924.280998818" observedRunningTime="2026-01-30 06:46:29.869291598 +0000 UTC m=+5925.239201855" watchObservedRunningTime="2026-01-30 06:46:29.879058611 +0000 UTC m=+5925.248968878" Jan 30 06:46:29 crc kubenswrapper[4931]: I0130 06:46:29.896605 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-795f886c68-gphf9" podStartSLOduration=2.108495972 podStartE2EDuration="4.896580131s" podCreationTimestamp="2026-01-30 06:46:25 +0000 UTC" firstStartedPulling="2026-01-30 06:46:26.121729026 +0000 UTC m=+5921.491639283" lastFinishedPulling="2026-01-30 06:46:28.909813185 +0000 UTC m=+5924.279723442" observedRunningTime="2026-01-30 06:46:29.89009039 +0000 UTC m=+5925.260000657" watchObservedRunningTime="2026-01-30 06:46:29.896580131 +0000 UTC m=+5925.266490388" 
Jan 30 06:46:31 crc kubenswrapper[4931]: I0130 06:46:31.422304 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:46:31 crc kubenswrapper[4931]: E0130 06:46:31.422759 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:46:32 crc kubenswrapper[4931]: I0130 06:46:32.893387 4931 generic.go:334] "Generic (PLEG): container finished" podID="1d524b32-d060-41f3-88a6-d5339c438fff" containerID="85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88" exitCode=0 Jan 30 06:46:32 crc kubenswrapper[4931]: I0130 06:46:32.893757 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8f5df775-m6dvd" event={"ID":"1d524b32-d060-41f3-88a6-d5339c438fff","Type":"ContainerDied","Data":"85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88"} Jan 30 06:46:36 crc kubenswrapper[4931]: I0130 06:46:36.897818 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-795f886c68-gphf9" Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.050721 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8be9-account-create-update-4qptt"] Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.060499 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fcgh6"] Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.073522 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fcgh6"] Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.083295 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7944f98bdf-sfnzs" Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.085794 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8be9-account-create-update-4qptt"] Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.436182 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cc38ea-1412-4e17-9c74-779b7c6d701c" path="/var/lib/kubelet/pods/f6cc38ea-1412-4e17-9c74-779b7c6d701c/volumes" Jan 30 06:46:37 crc kubenswrapper[4931]: I0130 06:46:37.436826 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe5a82c2-728c-40a6-83b0-37ba70d84931" path="/var/lib/kubelet/pods/fe5a82c2-728c-40a6-83b0-37ba70d84931/volumes" Jan 30 06:46:38 crc kubenswrapper[4931]: I0130 06:46:38.084956 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b8f5df775-m6dvd" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 06:46:43 crc kubenswrapper[4931]: I0130 06:46:43.423622 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:46:43 crc kubenswrapper[4931]: E0130 06:46:43.426347 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:46:44 crc kubenswrapper[4931]: I0130 06:46:44.047140 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-529p5"] Jan 30 06:46:44 crc kubenswrapper[4931]: I0130 06:46:44.059626 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-529p5"] Jan 30 06:46:45 crc kubenswrapper[4931]: I0130 06:46:45.408750 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d6f44c564-6wts7" Jan 30 06:46:45 crc kubenswrapper[4931]: I0130 06:46:45.456204 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff91271-f1e2-4aaf-adec-bc61ce9dedad" path="/var/lib/kubelet/pods/bff91271-f1e2-4aaf-adec-bc61ce9dedad/volumes" Jan 30 06:46:48 crc kubenswrapper[4931]: I0130 06:46:48.085366 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b8f5df775-m6dvd" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 06:46:54 crc kubenswrapper[4931]: I0130 06:46:54.422031 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:46:54 crc kubenswrapper[4931]: E0130 06:46:54.422972 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:46:58 crc kubenswrapper[4931]: I0130 06:46:58.085074 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7b8f5df775-m6dvd" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.116:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.116:8080: connect: connection refused" Jan 30 06:46:58 crc kubenswrapper[4931]: I0130 06:46:58.085703 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.636410 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.747371 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-config-data\") pod \"1d524b32-d060-41f3-88a6-d5339c438fff\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.747451 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-scripts\") pod \"1d524b32-d060-41f3-88a6-d5339c438fff\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.747545 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d524b32-d060-41f3-88a6-d5339c438fff-logs\") pod \"1d524b32-d060-41f3-88a6-d5339c438fff\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.747643 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-477f8\" (UniqueName: \"kubernetes.io/projected/1d524b32-d060-41f3-88a6-d5339c438fff-kube-api-access-477f8\") pod \"1d524b32-d060-41f3-88a6-d5339c438fff\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.747701 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d524b32-d060-41f3-88a6-d5339c438fff-horizon-secret-key\") pod \"1d524b32-d060-41f3-88a6-d5339c438fff\" (UID: \"1d524b32-d060-41f3-88a6-d5339c438fff\") " Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.748055 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d524b32-d060-41f3-88a6-d5339c438fff-logs" (OuterVolumeSpecName: "logs") pod "1d524b32-d060-41f3-88a6-d5339c438fff" (UID: "1d524b32-d060-41f3-88a6-d5339c438fff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.754139 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d524b32-d060-41f3-88a6-d5339c438fff-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1d524b32-d060-41f3-88a6-d5339c438fff" (UID: "1d524b32-d060-41f3-88a6-d5339c438fff"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.754694 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d524b32-d060-41f3-88a6-d5339c438fff-kube-api-access-477f8" (OuterVolumeSpecName: "kube-api-access-477f8") pod "1d524b32-d060-41f3-88a6-d5339c438fff" (UID: "1d524b32-d060-41f3-88a6-d5339c438fff"). InnerVolumeSpecName "kube-api-access-477f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.776521 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-config-data" (OuterVolumeSpecName: "config-data") pod "1d524b32-d060-41f3-88a6-d5339c438fff" (UID: "1d524b32-d060-41f3-88a6-d5339c438fff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.776568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-scripts" (OuterVolumeSpecName: "scripts") pod "1d524b32-d060-41f3-88a6-d5339c438fff" (UID: "1d524b32-d060-41f3-88a6-d5339c438fff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.850018 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.850051 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d524b32-d060-41f3-88a6-d5339c438fff-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.850061 4931 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d524b32-d060-41f3-88a6-d5339c438fff-logs\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.850071 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-477f8\" (UniqueName: \"kubernetes.io/projected/1d524b32-d060-41f3-88a6-d5339c438fff-kube-api-access-477f8\") on node \"crc\" DevicePath \"\"" Jan 30 06:46:59 crc kubenswrapper[4931]: I0130 06:46:59.850083 4931 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d524b32-d060-41f3-88a6-d5339c438fff-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.200240 4931 generic.go:334] "Generic (PLEG): container finished" podID="1d524b32-d060-41f3-88a6-d5339c438fff" containerID="16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb" exitCode=137 Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.200276 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b8f5df775-m6dvd" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.200293 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8f5df775-m6dvd" event={"ID":"1d524b32-d060-41f3-88a6-d5339c438fff","Type":"ContainerDied","Data":"16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb"} Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.200868 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8f5df775-m6dvd" event={"ID":"1d524b32-d060-41f3-88a6-d5339c438fff","Type":"ContainerDied","Data":"b4e9652d680d0769885e340ea1c5e364bdd99c6ea6b532de8c239df55ba5c48d"} Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.200885 4931 scope.go:117] "RemoveContainer" containerID="85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.248386 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b8f5df775-m6dvd"] Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.255796 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b8f5df775-m6dvd"] Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.385788 4931 scope.go:117] "RemoveContainer" containerID="16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.412764 4931 scope.go:117] "RemoveContainer" containerID="85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88" Jan 30 06:47:00 crc kubenswrapper[4931]: E0130 06:47:00.413283 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88\": container with ID starting with 85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88 not found: ID does not exist" containerID="85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.413328 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88"} err="failed to get container status \"85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88\": rpc error: code = NotFound desc = could not find container \"85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88\": container with ID starting with 85548fef8024dea6f8f6115f0c41910b99449a559d12a7ba0b4376b470f15f88 not found: ID does not exist" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.413352 4931 scope.go:117] "RemoveContainer" containerID="16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb" Jan 30 06:47:00 crc kubenswrapper[4931]: E0130 06:47:00.413824 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb\": container with ID starting with 16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb not found: ID does not exist" containerID="16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb" Jan 30 06:47:00 crc kubenswrapper[4931]: I0130 06:47:00.413849 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb"} err="failed to get container status 
\"16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb\": rpc error: code = NotFound desc = could not find container \"16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb\": container with ID starting with 16d9ec94d86ff7aef0c860f5097848e89654e8c653024be93461df34e92fa0cb not found: ID does not exist" Jan 30 06:47:01 crc kubenswrapper[4931]: I0130 06:47:01.305195 4931 scope.go:117] "RemoveContainer" containerID="6f79520471e9a429df8d71872cafba7c48f1385750af4061f0e5ea5c4355f53e" Jan 30 06:47:01 crc kubenswrapper[4931]: I0130 06:47:01.338335 4931 scope.go:117] "RemoveContainer" containerID="b12392121e0278ef6aaee0ef2cb91f20ce791df236403c3611d10649bcb909d3" Jan 30 06:47:01 crc kubenswrapper[4931]: I0130 06:47:01.373099 4931 scope.go:117] "RemoveContainer" containerID="4711e717af206225417ee23e6a5a6867fd0fca04b0c1bb798437c5d765e9f38b" Jan 30 06:47:01 crc kubenswrapper[4931]: I0130 06:47:01.431959 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" path="/var/lib/kubelet/pods/1d524b32-d060-41f3-88a6-d5339c438fff/volumes" Jan 30 06:47:05 crc kubenswrapper[4931]: I0130 06:47:05.432824 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:47:06 crc kubenswrapper[4931]: I0130 06:47:06.267048 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"fb726f99bee299533a0b10daf42c4d3c80f89e1b2459842bb36e1df7a3f9faa7"} Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.212041 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll"] Jan 30 06:47:08 crc kubenswrapper[4931]: E0130 06:47:08.212954 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon-log" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.212966 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon-log" Jan 30 06:47:08 crc kubenswrapper[4931]: E0130 06:47:08.212978 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.212986 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.213175 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.213200 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d524b32-d060-41f3-88a6-d5339c438fff" containerName="horizon-log" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.214654 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.216573 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.232876 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll"] Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.333455 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmcjq\" (UniqueName: \"kubernetes.io/projected/8db6c802-44ea-48b4-a63f-c6c43492e6bc-kube-api-access-lmcjq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.333725 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.334103 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.438492 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.438617 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.438640 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmcjq\" (UniqueName: \"kubernetes.io/projected/8db6c802-44ea-48b4-a63f-c6c43492e6bc-kube-api-access-lmcjq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.439381 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.439672 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.490226 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmcjq\" (UniqueName: \"kubernetes.io/projected/8db6c802-44ea-48b4-a63f-c6c43492e6bc-kube-api-access-lmcjq\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:08 crc kubenswrapper[4931]: I0130 06:47:08.532846 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:09 crc kubenswrapper[4931]: I0130 06:47:09.081503 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll"] Jan 30 06:47:09 crc kubenswrapper[4931]: I0130 06:47:09.294162 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" event={"ID":"8db6c802-44ea-48b4-a63f-c6c43492e6bc","Type":"ContainerStarted","Data":"a25c57f5fdcd778577fea8d3560ffb6508c8c6a43e5aa9b1c708756b6a5b4cda"} Jan 30 06:47:09 crc kubenswrapper[4931]: I0130 06:47:09.294541 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" event={"ID":"8db6c802-44ea-48b4-a63f-c6c43492e6bc","Type":"ContainerStarted","Data":"e83f3a2231d4e8b900f553f14b20706a468a8bbf08b33c6e42175c37e907d02b"} Jan 30 06:47:10 crc kubenswrapper[4931]: I0130 06:47:10.311274 4931 generic.go:334] "Generic (PLEG): container finished" podID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerID="a25c57f5fdcd778577fea8d3560ffb6508c8c6a43e5aa9b1c708756b6a5b4cda" exitCode=0 Jan 30 06:47:10 crc kubenswrapper[4931]: I0130 06:47:10.311325 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" event={"ID":"8db6c802-44ea-48b4-a63f-c6c43492e6bc","Type":"ContainerDied","Data":"a25c57f5fdcd778577fea8d3560ffb6508c8c6a43e5aa9b1c708756b6a5b4cda"} Jan 30 06:47:12 crc kubenswrapper[4931]: I0130 06:47:12.340052 4931 generic.go:334] "Generic (PLEG): container finished" podID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerID="3d9512a36b8f2d2bbb2d2a714b4fd87023b7b20bbdb765b5393aa21e003e11fe" exitCode=0 Jan 30 06:47:12 crc kubenswrapper[4931]: I0130 06:47:12.340166 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" 
event={"ID":"8db6c802-44ea-48b4-a63f-c6c43492e6bc","Type":"ContainerDied","Data":"3d9512a36b8f2d2bbb2d2a714b4fd87023b7b20bbdb765b5393aa21e003e11fe"} Jan 30 06:47:13 crc kubenswrapper[4931]: I0130 06:47:13.353189 4931 generic.go:334] "Generic (PLEG): container finished" podID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerID="768e5dca9c8f197b42c027297409e1b69f038de6d463f6b9a171e4d80977f474" exitCode=0 Jan 30 06:47:13 crc kubenswrapper[4931]: I0130 06:47:13.353298 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" event={"ID":"8db6c802-44ea-48b4-a63f-c6c43492e6bc","Type":"ContainerDied","Data":"768e5dca9c8f197b42c027297409e1b69f038de6d463f6b9a171e4d80977f474"} Jan 30 06:47:14 crc kubenswrapper[4931]: I0130 06:47:14.066530 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9f04-account-create-update-wgg9g"] Jan 30 06:47:14 crc kubenswrapper[4931]: I0130 06:47:14.097053 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9f04-account-create-update-wgg9g"] Jan 30 06:47:14 crc kubenswrapper[4931]: I0130 06:47:14.112916 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zkc49"] Jan 30 06:47:14 crc kubenswrapper[4931]: I0130 06:47:14.122904 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zkc49"] Jan 30 06:47:14 crc kubenswrapper[4931]: I0130 06:47:14.812185 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.000270 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmcjq\" (UniqueName: \"kubernetes.io/projected/8db6c802-44ea-48b4-a63f-c6c43492e6bc-kube-api-access-lmcjq\") pod \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.000465 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-bundle\") pod \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.000569 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-util\") pod \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\" (UID: \"8db6c802-44ea-48b4-a63f-c6c43492e6bc\") " Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.002542 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-bundle" (OuterVolumeSpecName: "bundle") pod "8db6c802-44ea-48b4-a63f-c6c43492e6bc" (UID: "8db6c802-44ea-48b4-a63f-c6c43492e6bc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.013586 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db6c802-44ea-48b4-a63f-c6c43492e6bc-kube-api-access-lmcjq" (OuterVolumeSpecName: "kube-api-access-lmcjq") pod "8db6c802-44ea-48b4-a63f-c6c43492e6bc" (UID: "8db6c802-44ea-48b4-a63f-c6c43492e6bc"). InnerVolumeSpecName "kube-api-access-lmcjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.016248 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-util" (OuterVolumeSpecName: "util") pod "8db6c802-44ea-48b4-a63f-c6c43492e6bc" (UID: "8db6c802-44ea-48b4-a63f-c6c43492e6bc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.102944 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmcjq\" (UniqueName: \"kubernetes.io/projected/8db6c802-44ea-48b4-a63f-c6c43492e6bc-kube-api-access-lmcjq\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.102992 4931 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.103011 4931 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8db6c802-44ea-48b4-a63f-c6c43492e6bc-util\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.378449 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" event={"ID":"8db6c802-44ea-48b4-a63f-c6c43492e6bc","Type":"ContainerDied","Data":"e83f3a2231d4e8b900f553f14b20706a468a8bbf08b33c6e42175c37e907d02b"} Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.378805 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83f3a2231d4e8b900f553f14b20706a468a8bbf08b33c6e42175c37e907d02b" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.378529 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.442209 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d676c50-5909-4eeb-a22b-63823761ab17" path="/var/lib/kubelet/pods/7d676c50-5909-4eeb-a22b-63823761ab17/volumes" Jan 30 06:47:15 crc kubenswrapper[4931]: I0130 06:47:15.443220 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80acfb99-2d96-453a-b29a-62f23608dd5f" path="/var/lib/kubelet/pods/80acfb99-2d96-453a-b29a-62f23608dd5f/volumes" Jan 30 06:47:20 crc kubenswrapper[4931]: I0130 06:47:20.044769 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-x9ngk"] Jan 30 06:47:20 crc kubenswrapper[4931]: I0130 06:47:20.055697 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-x9ngk"] Jan 30 06:47:21 crc kubenswrapper[4931]: I0130 06:47:21.432735 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e7d2a9-093c-4495-81ab-99972c72b179" path="/var/lib/kubelet/pods/08e7d2a9-093c-4495-81ab-99972c72b179/volumes" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.073392 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m"] Jan 30 06:47:26 crc kubenswrapper[4931]: E0130 06:47:26.074431 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="extract" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.074446 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="extract" Jan 30 06:47:26 crc kubenswrapper[4931]: E0130 06:47:26.074471 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="pull" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.074478 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="pull" Jan 30 06:47:26 crc kubenswrapper[4931]: E0130 06:47:26.074491 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="util" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.074498 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="util" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.074733 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db6c802-44ea-48b4-a63f-c6c43492e6bc" containerName="extract" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.075473 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.083409 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.083822 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.083962 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-vkqdq" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.111248 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.145061 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.146848 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.152251 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-c5zbp" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.154904 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.165304 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.170673 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.187479 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.200552 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.234502 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/119a1b91-5877-408e-8721-dccac5a05367-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z\" (UID: \"119a1b91-5877-408e-8721-dccac5a05367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.234588 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/119a1b91-5877-408e-8721-dccac5a05367-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z\" (UID: \"119a1b91-5877-408e-8721-dccac5a05367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.234749 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xcf\" (UniqueName: \"kubernetes.io/projected/2668098b-064f-4807-b2ee-7efb5dc89fb8-kube-api-access-m8xcf\") pod \"obo-prometheus-operator-68bc856cb9-lx27m\" (UID: \"2668098b-064f-4807-b2ee-7efb5dc89fb8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.271795 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qm276"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.273039 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.275678 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-pn6vp" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.275796 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.293880 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qm276"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.337052 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xcf\" (UniqueName: \"kubernetes.io/projected/2668098b-064f-4807-b2ee-7efb5dc89fb8-kube-api-access-m8xcf\") pod \"obo-prometheus-operator-68bc856cb9-lx27m\" (UID: \"2668098b-064f-4807-b2ee-7efb5dc89fb8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.337161 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38907dab-62b6-4364-b48c-8300b1fa2ad2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr\" (UID: \"38907dab-62b6-4364-b48c-8300b1fa2ad2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.337220 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/119a1b91-5877-408e-8721-dccac5a05367-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z\" (UID: \"119a1b91-5877-408e-8721-dccac5a05367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.337285 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/119a1b91-5877-408e-8721-dccac5a05367-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z\" (UID: \"119a1b91-5877-408e-8721-dccac5a05367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.337328 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38907dab-62b6-4364-b48c-8300b1fa2ad2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr\" (UID: \"38907dab-62b6-4364-b48c-8300b1fa2ad2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.343732 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/119a1b91-5877-408e-8721-dccac5a05367-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z\" (UID: \"119a1b91-5877-408e-8721-dccac5a05367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.359133 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/119a1b91-5877-408e-8721-dccac5a05367-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z\" (UID: \"119a1b91-5877-408e-8721-dccac5a05367\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.360009 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xcf\" (UniqueName: \"kubernetes.io/projected/2668098b-064f-4807-b2ee-7efb5dc89fb8-kube-api-access-m8xcf\") pod \"obo-prometheus-operator-68bc856cb9-lx27m\" (UID: \"2668098b-064f-4807-b2ee-7efb5dc89fb8\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.388394 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gw297"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.389817 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.403841 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-vlfpb" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.407405 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.417153 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gw297"] Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.439834 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38907dab-62b6-4364-b48c-8300b1fa2ad2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr\" (UID: \"38907dab-62b6-4364-b48c-8300b1fa2ad2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.439980 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5zv\" (UniqueName: \"kubernetes.io/projected/c63c0b3f-7290-4318-8db6-a1ae150b22e0-kube-api-access-4d5zv\") pod \"observability-operator-59bdc8b94-qm276\" (UID: \"c63c0b3f-7290-4318-8db6-a1ae150b22e0\") " pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.440031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c63c0b3f-7290-4318-8db6-a1ae150b22e0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qm276\" (UID: \"c63c0b3f-7290-4318-8db6-a1ae150b22e0\") " pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.440160 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38907dab-62b6-4364-b48c-8300b1fa2ad2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr\" (UID: \"38907dab-62b6-4364-b48c-8300b1fa2ad2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.443032 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38907dab-62b6-4364-b48c-8300b1fa2ad2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr\" (UID: \"38907dab-62b6-4364-b48c-8300b1fa2ad2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.446524 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38907dab-62b6-4364-b48c-8300b1fa2ad2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr\" (UID: \"38907dab-62b6-4364-b48c-8300b1fa2ad2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.467152 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.489095 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.543163 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/55072f8e-c1ef-45fd-9ec3-43e74afed3a7-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gw297\" (UID: \"55072f8e-c1ef-45fd-9ec3-43e74afed3a7\") " pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.543359 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d5zv\" (UniqueName: \"kubernetes.io/projected/c63c0b3f-7290-4318-8db6-a1ae150b22e0-kube-api-access-4d5zv\") pod \"observability-operator-59bdc8b94-qm276\" (UID: \"c63c0b3f-7290-4318-8db6-a1ae150b22e0\") " pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.543473 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c63c0b3f-7290-4318-8db6-a1ae150b22e0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qm276\" (UID: \"c63c0b3f-7290-4318-8db6-a1ae150b22e0\") " pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.543727 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck4jr\" (UniqueName: \"kubernetes.io/projected/55072f8e-c1ef-45fd-9ec3-43e74afed3a7-kube-api-access-ck4jr\") pod \"perses-operator-5bf474d74f-gw297\" (UID: \"55072f8e-c1ef-45fd-9ec3-43e74afed3a7\") " pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.572890 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c63c0b3f-7290-4318-8db6-a1ae150b22e0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-qm276\" (UID: \"c63c0b3f-7290-4318-8db6-a1ae150b22e0\") " pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.584048 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4d5zv\" (UniqueName: \"kubernetes.io/projected/c63c0b3f-7290-4318-8db6-a1ae150b22e0-kube-api-access-4d5zv\") pod \"observability-operator-59bdc8b94-qm276\" (UID: \"c63c0b3f-7290-4318-8db6-a1ae150b22e0\") " pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.595444 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.654921 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck4jr\" (UniqueName: \"kubernetes.io/projected/55072f8e-c1ef-45fd-9ec3-43e74afed3a7-kube-api-access-ck4jr\") pod \"perses-operator-5bf474d74f-gw297\" (UID: \"55072f8e-c1ef-45fd-9ec3-43e74afed3a7\") " pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.655082 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/55072f8e-c1ef-45fd-9ec3-43e74afed3a7-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gw297\" (UID: \"55072f8e-c1ef-45fd-9ec3-43e74afed3a7\") " pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.656006 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/55072f8e-c1ef-45fd-9ec3-43e74afed3a7-openshift-service-ca\") pod \"perses-operator-5bf474d74f-gw297\" (UID: \"55072f8e-c1ef-45fd-9ec3-43e74afed3a7\") " pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.719473 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck4jr\" (UniqueName: \"kubernetes.io/projected/55072f8e-c1ef-45fd-9ec3-43e74afed3a7-kube-api-access-ck4jr\") pod \"perses-operator-5bf474d74f-gw297\" (UID: \"55072f8e-c1ef-45fd-9ec3-43e74afed3a7\") " pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:26 crc kubenswrapper[4931]: I0130 06:47:26.874557 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.084756 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m"] Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.097219 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z"] Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.217320 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr"] Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.341764 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-qm276"] Jan 30 06:47:27 crc kubenswrapper[4931]: W0130 06:47:27.346854 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc63c0b3f_7290_4318_8db6_a1ae150b22e0.slice/crio-1e1bd0d954b66af59ae51108612c7e3e66ee714efeac262528918ec8e68c0294 WatchSource:0}: Error finding container 1e1bd0d954b66af59ae51108612c7e3e66ee714efeac262528918ec8e68c0294: Status 404 returned error can't find the container with id 1e1bd0d954b66af59ae51108612c7e3e66ee714efeac262528918ec8e68c0294 Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.455079 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-gw297"] Jan 30 06:47:27 crc kubenswrapper[4931]: W0130 06:47:27.464354 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55072f8e_c1ef_45fd_9ec3_43e74afed3a7.slice/crio-cdc09e6144bfe13c994836816be674a2482b6fba1840c0c8d82684afbe83855f WatchSource:0}: Error finding container cdc09e6144bfe13c994836816be674a2482b6fba1840c0c8d82684afbe83855f: Status 404 returned error can't find the container with id cdc09e6144bfe13c994836816be674a2482b6fba1840c0c8d82684afbe83855f Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.537008 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-qm276" event={"ID":"c63c0b3f-7290-4318-8db6-a1ae150b22e0","Type":"ContainerStarted","Data":"1e1bd0d954b66af59ae51108612c7e3e66ee714efeac262528918ec8e68c0294"} Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.539208 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" event={"ID":"2668098b-064f-4807-b2ee-7efb5dc89fb8","Type":"ContainerStarted","Data":"337b1d501ed16bb3dddf280a3f7608ee524e7d7ad390612a1f4b5a8428bd92ad"} Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.541171 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" event={"ID":"119a1b91-5877-408e-8721-dccac5a05367","Type":"ContainerStarted","Data":"49697c2c21142e44412f16e27dc59cefe601284a387b0d8f77575a53056342a1"} Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.543756 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gw297" event={"ID":"55072f8e-c1ef-45fd-9ec3-43e74afed3a7","Type":"ContainerStarted","Data":"cdc09e6144bfe13c994836816be674a2482b6fba1840c0c8d82684afbe83855f"} Jan 30 06:47:27 crc kubenswrapper[4931]: I0130 06:47:27.547800 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" event={"ID":"38907dab-62b6-4364-b48c-8300b1fa2ad2","Type":"ContainerStarted","Data":"100d94ead8b427afcfd12524ab4a6e5628fc75d3aeb2e3152310812206695d05"} Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.755196 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" event={"ID":"2668098b-064f-4807-b2ee-7efb5dc89fb8","Type":"ContainerStarted","Data":"b1ae7c48818c75c8f0f058c5a8e32f448f0319b09927c3cbfba7d5a6ec89e174"} Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.762356 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" event={"ID":"119a1b91-5877-408e-8721-dccac5a05367","Type":"ContainerStarted","Data":"f757d946a3ad0e0d7f9bc785018faa8d2249f50a81150e7d76525a29da1c33f5"} Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.764255 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-gw297" event={"ID":"55072f8e-c1ef-45fd-9ec3-43e74afed3a7","Type":"ContainerStarted","Data":"bae6c02ffb2792738f9ce9a06213783c8b1959ada4ab57320eafed7bb96a04bb"} Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.764353 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.766104 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" event={"ID":"38907dab-62b6-4364-b48c-8300b1fa2ad2","Type":"ContainerStarted","Data":"92b828684851fdf725b5b5116d1adf9cac3d5fddcbc4aca79eaeb832d1c93b8f"} Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.767779 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-qm276" event={"ID":"c63c0b3f-7290-4318-8db6-a1ae150b22e0","Type":"ContainerStarted","Data":"7e8a19a93897c4d07e99f2031c8047a8307c95f13efc0ccfc7740dc247da194a"} Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.768264 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.769978 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-qm276" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.782111 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lx27m" podStartSLOduration=2.527822385 podStartE2EDuration="14.782094323s" podCreationTimestamp="2026-01-30 06:47:26 +0000 UTC" firstStartedPulling="2026-01-30 06:47:27.087678535 +0000 UTC m=+5982.457588792" lastFinishedPulling="2026-01-30 06:47:39.341950473 +0000 UTC m=+5994.711860730" observedRunningTime="2026-01-30 06:47:40.778405479 +0000 UTC m=+5996.148315736" watchObservedRunningTime="2026-01-30 06:47:40.782094323 +0000 UTC m=+5996.152004580" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.804712 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr" podStartSLOduration=2.702875903 podStartE2EDuration="14.804694719s" 
podCreationTimestamp="2026-01-30 06:47:26 +0000 UTC" firstStartedPulling="2026-01-30 06:47:27.224633021 +0000 UTC m=+5982.594543268" lastFinishedPulling="2026-01-30 06:47:39.326451817 +0000 UTC m=+5994.696362084" observedRunningTime="2026-01-30 06:47:40.795932312 +0000 UTC m=+5996.165842569" watchObservedRunningTime="2026-01-30 06:47:40.804694719 +0000 UTC m=+5996.174604976" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.856155 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-gw297" podStartSLOduration=2.979634913 podStartE2EDuration="14.856139677s" podCreationTimestamp="2026-01-30 06:47:26 +0000 UTC" firstStartedPulling="2026-01-30 06:47:27.467859097 +0000 UTC m=+5982.837769364" lastFinishedPulling="2026-01-30 06:47:39.344363861 +0000 UTC m=+5994.714274128" observedRunningTime="2026-01-30 06:47:40.81930385 +0000 UTC m=+5996.189214107" watchObservedRunningTime="2026-01-30 06:47:40.856139677 +0000 UTC m=+5996.226049934" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.864036 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-qm276" podStartSLOduration=2.807341913 podStartE2EDuration="14.864020609s" podCreationTimestamp="2026-01-30 06:47:26 +0000 UTC" firstStartedPulling="2026-01-30 06:47:27.348885358 +0000 UTC m=+5982.718795615" lastFinishedPulling="2026-01-30 06:47:39.405564054 +0000 UTC m=+5994.775474311" observedRunningTime="2026-01-30 06:47:40.859220384 +0000 UTC m=+5996.229130641" watchObservedRunningTime="2026-01-30 06:47:40.864020609 +0000 UTC m=+5996.233930866" Jan 30 06:47:40 crc kubenswrapper[4931]: I0130 06:47:40.884619 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z" podStartSLOduration=2.648008158 podStartE2EDuration="14.884605298s" podCreationTimestamp="2026-01-30 06:47:26 +0000 UTC" firstStartedPulling="2026-01-30 06:47:27.102966216 +0000 UTC m=+5982.472876473" lastFinishedPulling="2026-01-30 06:47:39.339563306 +0000 UTC m=+5994.709473613" observedRunningTime="2026-01-30 06:47:40.880720809 +0000 UTC m=+5996.250631066" watchObservedRunningTime="2026-01-30 06:47:40.884605298 +0000 UTC m=+5996.254515555" Jan 30 06:47:46 crc kubenswrapper[4931]: I0130 06:47:46.878957 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.403540 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.404104 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerName="openstackclient" containerID="cri-o://73102981ee7cbbfca21cfe57a2e7dc11e7bcd50d611864190f31f741ca04bd1e" gracePeriod=2 Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.416447 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.463294 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: E0130 06:47:49.463773 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerName="openstackclient" Jan 30 06:47:49 crc 
Jan 30 06:47:46 crc kubenswrapper[4931]: I0130 06:47:46.878957 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-gw297" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.403540 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.404104 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerName="openstackclient" containerID="cri-o://73102981ee7cbbfca21cfe57a2e7dc11e7bcd50d611864190f31f741ca04bd1e" gracePeriod=2 Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.416447 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.463294 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: E0130 06:47:49.463773 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerName="openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.463791 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerName="openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.464033 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerName="openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.464726 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.486137 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.548509 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.548667 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.548715 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75swp\" (UniqueName: \"kubernetes.io/projected/c06bf028-7b95-418f-9285-80094ac02aa7-kube-api-access-75swp\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.575052 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: E0130 06:47:49.576721 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-75swp openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="c06bf028-7b95-418f-9285-80094ac02aa7" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.635660 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.664034 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.664124 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75swp\" (UniqueName: \"kubernetes.io/projected/c06bf028-7b95-418f-9285-80094ac02aa7-kube-api-access-75swp\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.664336 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config\") pod
\"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.669771 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: E0130 06:47:49.670241 4931 projected.go:194] Error preparing data for projected volume kube-api-access-75swp for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (c06bf028-7b95-418f-9285-80094ac02aa7) does not match the UID in record. The object might have been deleted and then recreated Jan 30 06:47:49 crc kubenswrapper[4931]: E0130 06:47:49.670378 4931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06bf028-7b95-418f-9285-80094ac02aa7-kube-api-access-75swp podName:c06bf028-7b95-418f-9285-80094ac02aa7 nodeName:}" failed. No retries permitted until 2026-01-30 06:47:50.17035598 +0000 UTC m=+6005.540266237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-75swp" (UniqueName: "kubernetes.io/projected/c06bf028-7b95-418f-9285-80094ac02aa7-kube-api-access-75swp") pod "openstackclient" (UID: "c06bf028-7b95-418f-9285-80094ac02aa7") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (c06bf028-7b95-418f-9285-80094ac02aa7) does not match the UID in record. The object might have been deleted and then recreated Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.680208 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.681882 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.716549 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.730066 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config-secret\") pod \"openstackclient\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.769259 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9261635-8331-44ae-88d1-df73db930d2d-openstack-config\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.769318 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59l2t\" (UniqueName: \"kubernetes.io/projected/a9261635-8331-44ae-88d1-df73db930d2d-kube-api-access-59l2t\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.769367 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9261635-8331-44ae-88d1-df73db930d2d-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.813176 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.814882 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.819798 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-lhdr6" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.847254 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.870579 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59l2t\" (UniqueName: \"kubernetes.io/projected/a9261635-8331-44ae-88d1-df73db930d2d-kube-api-access-59l2t\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.870665 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9261635-8331-44ae-88d1-df73db930d2d-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.870804 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fx5b\" (UniqueName: \"kubernetes.io/projected/eb1cdd0a-4520-49ce-8bc6-686dba45e7e8-kube-api-access-8fx5b\") pod \"kube-state-metrics-0\" (UID: \"eb1cdd0a-4520-49ce-8bc6-686dba45e7e8\") " pod="openstack/kube-state-metrics-0" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.870832 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9261635-8331-44ae-88d1-df73db930d2d-openstack-config\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.871607 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9261635-8331-44ae-88d1-df73db930d2d-openstack-config\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.878010 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9261635-8331-44ae-88d1-df73db930d2d-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.901282 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59l2t\" (UniqueName: \"kubernetes.io/projected/a9261635-8331-44ae-88d1-df73db930d2d-kube-api-access-59l2t\") pod \"openstackclient\" (UID: \"a9261635-8331-44ae-88d1-df73db930d2d\") " pod="openstack/openstackclient" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.972547 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fx5b\" (UniqueName: \"kubernetes.io/projected/eb1cdd0a-4520-49ce-8bc6-686dba45e7e8-kube-api-access-8fx5b\") pod \"kube-state-metrics-0\" (UID: \"eb1cdd0a-4520-49ce-8bc6-686dba45e7e8\") " pod="openstack/kube-state-metrics-0" Jan 30 06:47:49 crc kubenswrapper[4931]: I0130 06:47:49.992331 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8fx5b\" (UniqueName: \"kubernetes.io/projected/eb1cdd0a-4520-49ce-8bc6-686dba45e7e8-kube-api-access-8fx5b\") pod \"kube-state-metrics-0\" (UID: \"eb1cdd0a-4520-49ce-8bc6-686dba45e7e8\") " pod="openstack/kube-state-metrics-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.094692 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.106614 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.107128 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c06bf028-7b95-418f-9285-80094ac02aa7" podUID="a9261635-8331-44ae-88d1-df73db930d2d" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.109336 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.147322 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.179077 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config-secret\") pod \"c06bf028-7b95-418f-9285-80094ac02aa7\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.179299 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config\") pod \"c06bf028-7b95-418f-9285-80094ac02aa7\" (UID: \"c06bf028-7b95-418f-9285-80094ac02aa7\") " Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.179796 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75swp\" (UniqueName: \"kubernetes.io/projected/c06bf028-7b95-418f-9285-80094ac02aa7-kube-api-access-75swp\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.180321 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c06bf028-7b95-418f-9285-80094ac02aa7" (UID: "c06bf028-7b95-418f-9285-80094ac02aa7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.209195 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c06bf028-7b95-418f-9285-80094ac02aa7" (UID: "c06bf028-7b95-418f-9285-80094ac02aa7"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.281115 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.281149 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c06bf028-7b95-418f-9285-80094ac02aa7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.547907 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.551168 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.555033 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.555301 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.563035 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.563235 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.563526 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-w2s75" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.580363 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.601953 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfc9f\" (UniqueName: \"kubernetes.io/projected/faef005b-c58c-4b22-944c-defd3471fa32-kube-api-access-hfc9f\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.601985 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.602048 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/faef005b-c58c-4b22-944c-defd3471fa32-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.602069 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/faef005b-c58c-4b22-944c-defd3471fa32-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.602101 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/faef005b-c58c-4b22-944c-defd3471fa32-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.602122 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.602216 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.703496 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/faef005b-c58c-4b22-944c-defd3471fa32-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.703817 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.703985 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.704070 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfc9f\" (UniqueName: \"kubernetes.io/projected/faef005b-c58c-4b22-944c-defd3471fa32-kube-api-access-hfc9f\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.704146 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.704244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/faef005b-c58c-4b22-944c-defd3471fa32-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.704322 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/faef005b-c58c-4b22-944c-defd3471fa32-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.704847 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/faef005b-c58c-4b22-944c-defd3471fa32-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.717021 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.717707 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.718412 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/faef005b-c58c-4b22-944c-defd3471fa32-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.728000 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/faef005b-c58c-4b22-944c-defd3471fa32-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.740051 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/faef005b-c58c-4b22-944c-defd3471fa32-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.748795 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfc9f\" (UniqueName: \"kubernetes.io/projected/faef005b-c58c-4b22-944c-defd3471fa32-kube-api-access-hfc9f\") pod \"alertmanager-metric-storage-0\" (UID: \"faef005b-c58c-4b22-944c-defd3471fa32\") " pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.919398 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 30 06:47:50 crc kubenswrapper[4931]: I0130 06:47:50.920542 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.028920 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.031629 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.040794 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.041060 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bx8cf" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.041206 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.041842 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.041875 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.042063 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.042089 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.042239 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.072488 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.129467 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.139557 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb1cdd0a-4520-49ce-8bc6-686dba45e7e8","Type":"ContainerStarted","Data":"2d8f55e0331f11036c5786ac23d7de2c089dbeb8f93b1579ad7aaa1b700aa4b3"} Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.176348 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.179606 4931 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c06bf028-7b95-418f-9285-80094ac02aa7" podUID="a9261635-8331-44ae-88d1-df73db930d2d" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228569 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228623 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-config\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228648 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228669 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228698 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228751 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/201626a3-bc04-48ab-859c-5a7ffe97670e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228816 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/201626a3-bc04-48ab-859c-5a7ffe97670e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228865 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npwx6\" (UniqueName: \"kubernetes.io/projected/201626a3-bc04-48ab-859c-5a7ffe97670e-kube-api-access-npwx6\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228886 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.228909 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.332692 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/201626a3-bc04-48ab-859c-5a7ffe97670e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.332986 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npwx6\" (UniqueName: \"kubernetes.io/projected/201626a3-bc04-48ab-859c-5a7ffe97670e-kube-api-access-npwx6\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333008 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333032 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333090 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\") pod \"prometheus-metric-storage-0\" (UID: 
\"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333113 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-config\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333132 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333166 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333189 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.333222 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/201626a3-bc04-48ab-859c-5a7ffe97670e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.339236 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.339722 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.340382 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/201626a3-bc04-48ab-859c-5a7ffe97670e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.344788 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/201626a3-bc04-48ab-859c-5a7ffe97670e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.346038 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/201626a3-bc04-48ab-859c-5a7ffe97670e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.348905 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-config\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.358151 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.363401 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npwx6\" (UniqueName: \"kubernetes.io/projected/201626a3-bc04-48ab-859c-5a7ffe97670e-kube-api-access-npwx6\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.364217 4931 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.364271 4931 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea2045aa96daddbc52ed0a3ac5da2f96eb3f4f7be6270f0e4a84add5a5ee748a/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.386996 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/201626a3-bc04-48ab-859c-5a7ffe97670e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.451810 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06bf028-7b95-418f-9285-80094ac02aa7" path="/var/lib/kubelet/pods/c06bf028-7b95-418f-9285-80094ac02aa7/volumes" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.668871 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.776902 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d50d636-9fa9-4d96-8a8b-86354c2ddcd2\") pod \"prometheus-metric-storage-0\" (UID: \"201626a3-bc04-48ab-859c-5a7ffe97670e\") " pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:51 crc kubenswrapper[4931]: I0130 06:47:51.992086 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 30 06:47:52 crc kubenswrapper[4931]: I0130 06:47:52.138968 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faef005b-c58c-4b22-944c-defd3471fa32","Type":"ContainerStarted","Data":"09c97a7f80d27798d89cf46d3d17d28ad1db0bdb1357a4f8c7152d9252762f4c"} Jan 30 06:47:52 crc kubenswrapper[4931]: I0130 06:47:52.140331 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a9261635-8331-44ae-88d1-df73db930d2d","Type":"ContainerStarted","Data":"60cf8cbef84de7d548f52576f9f6189fcb562cc1bc3caf2f4e4227b7b4a94ca0"} Jan 30 06:47:52 crc kubenswrapper[4931]: I0130 06:47:52.143190 4931 generic.go:334] "Generic (PLEG): container finished" podID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" containerID="73102981ee7cbbfca21cfe57a2e7dc11e7bcd50d611864190f31f741ca04bd1e" exitCode=137 Jan 30 06:47:52 crc kubenswrapper[4931]: I0130 06:47:52.534695 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.155391 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a9261635-8331-44ae-88d1-df73db930d2d","Type":"ContainerStarted","Data":"7ef25917a5433af6332663c4a2a47d9d78ee326692e15534fb17e746d3f90e43"} Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.158120 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"201626a3-bc04-48ab-859c-5a7ffe97670e","Type":"ContainerStarted","Data":"ad1131374e01305446fc3c82a63690da5edabbd594006bde2373428cfa38f313"} Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.171534 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.171514745 podStartE2EDuration="4.171514745s" podCreationTimestamp="2026-01-30 06:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:47:53.171199386 +0000 UTC m=+6008.541109643" watchObservedRunningTime="2026-01-30 06:47:53.171514745 +0000 UTC m=+6008.541425052" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.267967 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.379465 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config\") pod \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.379572 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config-secret\") pod \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.379706 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c59zl\" (UniqueName: \"kubernetes.io/projected/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-kube-api-access-c59zl\") pod \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\" (UID: \"459f1ff6-e3cb-45a8-9a4a-0e24e7881407\") " Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.389887 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-kube-api-access-c59zl" (OuterVolumeSpecName: "kube-api-access-c59zl") pod "459f1ff6-e3cb-45a8-9a4a-0e24e7881407" (UID: "459f1ff6-e3cb-45a8-9a4a-0e24e7881407"). InnerVolumeSpecName "kube-api-access-c59zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.436194 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "459f1ff6-e3cb-45a8-9a4a-0e24e7881407" (UID: "459f1ff6-e3cb-45a8-9a4a-0e24e7881407"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.461224 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "459f1ff6-e3cb-45a8-9a4a-0e24e7881407" (UID: "459f1ff6-e3cb-45a8-9a4a-0e24e7881407"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.482661 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.482703 4931 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:53 crc kubenswrapper[4931]: I0130 06:47:53.482713 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c59zl\" (UniqueName: \"kubernetes.io/projected/459f1ff6-e3cb-45a8-9a4a-0e24e7881407-kube-api-access-c59zl\") on node \"crc\" DevicePath \"\"" Jan 30 06:47:54 crc kubenswrapper[4931]: I0130 06:47:54.170477 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"eb1cdd0a-4520-49ce-8bc6-686dba45e7e8","Type":"ContainerStarted","Data":"44a177c6e04af3412b97c40f5dabd9fcb08bf20bae8016685d5f9079a80c66ad"} Jan 30 06:47:54 crc kubenswrapper[4931]: I0130 06:47:54.170884 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 06:47:54 crc kubenswrapper[4931]: I0130 06:47:54.172262 4931 scope.go:117] "RemoveContainer" containerID="73102981ee7cbbfca21cfe57a2e7dc11e7bcd50d611864190f31f741ca04bd1e" Jan 30 06:47:54 crc kubenswrapper[4931]: I0130 06:47:54.172468 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 30 06:47:54 crc kubenswrapper[4931]: I0130 06:47:54.197543 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.801984573 podStartE2EDuration="5.197523786s" podCreationTimestamp="2026-01-30 06:47:49 +0000 UTC" firstStartedPulling="2026-01-30 06:47:50.965535058 +0000 UTC m=+6006.335445315" lastFinishedPulling="2026-01-30 06:47:53.361074251 +0000 UTC m=+6008.730984528" observedRunningTime="2026-01-30 06:47:54.187739671 +0000 UTC m=+6009.557649918" watchObservedRunningTime="2026-01-30 06:47:54.197523786 +0000 UTC m=+6009.567434043" Jan 30 06:47:55 crc kubenswrapper[4931]: I0130 06:47:55.438065 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459f1ff6-e3cb-45a8-9a4a-0e24e7881407" path="/var/lib/kubelet/pods/459f1ff6-e3cb-45a8-9a4a-0e24e7881407/volumes" Jan 30 06:48:00 crc kubenswrapper[4931]: I0130 06:48:00.152204 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 06:48:01 crc kubenswrapper[4931]: I0130 06:48:01.506901 4931 scope.go:117] "RemoveContainer" containerID="d17e4d5da3cbd98de6ce9452e4b21d30ccf3ad5f026d8b30b101024dc6fd4576" Jan 30 06:48:01 crc kubenswrapper[4931]: I0130 06:48:01.596304 4931 scope.go:117] "RemoveContainer" containerID="312f2f7be76e7df2d1974a8fc3d9bcd846d76d9bf2e6bb52018a7e69743078de" Jan 30 06:48:01 crc kubenswrapper[4931]: I0130 06:48:01.636732 4931 scope.go:117] "RemoveContainer" containerID="a9a1121e8223cb02db35e17fa42991de13dcf37e427dbdd0bf16fa8038651093" Jan 30 06:48:06 crc kubenswrapper[4931]: I0130 06:48:06.353751 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"201626a3-bc04-48ab-859c-5a7ffe97670e","Type":"ContainerStarted","Data":"c2eba11cac41762b5cd6c8068a20314519ca7b74bad5ea62a0adbde79dc90df4"} Jan 30 06:48:06 crc kubenswrapper[4931]: I0130 06:48:06.358513 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faef005b-c58c-4b22-944c-defd3471fa32","Type":"ContainerStarted","Data":"376225131c6a847f9df2d3be6e0526d8a705d0a9a91ec6c6c672058ed88bcafb"} Jan 30 06:48:15 crc kubenswrapper[4931]: I0130 06:48:15.482263 4931 generic.go:334] "Generic (PLEG): container finished" podID="faef005b-c58c-4b22-944c-defd3471fa32" containerID="376225131c6a847f9df2d3be6e0526d8a705d0a9a91ec6c6c672058ed88bcafb" exitCode=0 Jan 30 06:48:15 crc kubenswrapper[4931]: I0130 06:48:15.482385 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faef005b-c58c-4b22-944c-defd3471fa32","Type":"ContainerDied","Data":"376225131c6a847f9df2d3be6e0526d8a705d0a9a91ec6c6c672058ed88bcafb"} Jan 30 06:48:15 crc kubenswrapper[4931]: I0130 06:48:15.485145 4931 generic.go:334] "Generic (PLEG): container finished" podID="201626a3-bc04-48ab-859c-5a7ffe97670e" containerID="c2eba11cac41762b5cd6c8068a20314519ca7b74bad5ea62a0adbde79dc90df4" exitCode=0 Jan 30 06:48:15 crc kubenswrapper[4931]: I0130 06:48:15.485188 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"201626a3-bc04-48ab-859c-5a7ffe97670e","Type":"ContainerDied","Data":"c2eba11cac41762b5cd6c8068a20314519ca7b74bad5ea62a0adbde79dc90df4"} Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.037775 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vm2gb"] Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.054040 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7dkqh"] Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.066158 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5xpsl"] Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.077655 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vm2gb"] Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.090007 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7dkqh"] Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.100054 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5xpsl"] Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.450594 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2ac10a-2179-4d51-b7e8-31ac3621d798" path="/var/lib/kubelet/pods/0d2ac10a-2179-4d51-b7e8-31ac3621d798/volumes" Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.451414 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d595bdd-ffa6-4292-b4c2-1eba0736a6a4" path="/var/lib/kubelet/pods/5d595bdd-ffa6-4292-b4c2-1eba0736a6a4/volumes" Jan 30 06:48:17 crc kubenswrapper[4931]: I0130 06:48:17.464086 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede2117e-e3d5-46f6-8a54-1cd987370470" path="/var/lib/kubelet/pods/ede2117e-e3d5-46f6-8a54-1cd987370470/volumes" Jan 30 06:48:18 crc kubenswrapper[4931]: I0130 06:48:18.044202 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c3ab-account-create-update-6wqgk"] Jan 30 06:48:18 crc kubenswrapper[4931]: I0130 
06:48:18.059790 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9932-account-create-update-6qlx2"] Jan 30 06:48:18 crc kubenswrapper[4931]: I0130 06:48:18.071562 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-184a-account-create-update-b6t5s"] Jan 30 06:48:18 crc kubenswrapper[4931]: I0130 06:48:18.081574 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c3ab-account-create-update-6wqgk"] Jan 30 06:48:18 crc kubenswrapper[4931]: I0130 06:48:18.090838 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9932-account-create-update-6qlx2"] Jan 30 06:48:18 crc kubenswrapper[4931]: I0130 06:48:18.100171 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-184a-account-create-update-b6t5s"] Jan 30 06:48:19 crc kubenswrapper[4931]: I0130 06:48:19.528523 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2a0233-04c5-4382-948d-809c1216b075" path="/var/lib/kubelet/pods/0c2a0233-04c5-4382-948d-809c1216b075/volumes" Jan 30 06:48:19 crc kubenswrapper[4931]: I0130 06:48:19.594632 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730d8243-e8f1-4b7a-b012-d65ff132d427" path="/var/lib/kubelet/pods/730d8243-e8f1-4b7a-b012-d65ff132d427/volumes" Jan 30 06:48:19 crc kubenswrapper[4931]: I0130 06:48:19.595208 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a06db3-c381-45ef-883d-ee7393822e5a" path="/var/lib/kubelet/pods/c7a06db3-c381-45ef-883d-ee7393822e5a/volumes" Jan 30 06:48:27 crc kubenswrapper[4931]: I0130 06:48:27.039765 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thknd"] Jan 30 06:48:27 crc kubenswrapper[4931]: I0130 06:48:27.052269 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-thknd"] Jan 30 06:48:27 crc kubenswrapper[4931]: I0130 06:48:27.434097 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="693a2e91-1503-4caa-a71d-4f65d99a913c" path="/var/lib/kubelet/pods/693a2e91-1503-4caa-a71d-4f65d99a913c/volumes" Jan 30 06:48:29 crc kubenswrapper[4931]: E0130 06:48:29.970168 4931 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Jan 30 06:48:29 crc kubenswrapper[4931]: E0130 06:48:29.970858 4931 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npwx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(201626a3-bc04-48ab-859c-5a7ffe97670e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 06:48:30 crc kubenswrapper[4931]: I0130 
06:48:30.670467 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faef005b-c58c-4b22-944c-defd3471fa32","Type":"ContainerStarted","Data":"f1be7091ee17b166609f29d4cc5a133c686f77e6b89bdd157872b6f5dbddc0d1"} Jan 30 06:48:34 crc kubenswrapper[4931]: I0130 06:48:34.733653 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"201626a3-bc04-48ab-859c-5a7ffe97670e","Type":"ContainerStarted","Data":"0a255d21e6258b4c05baeadd78ecbe9e74a01311366f48578347d846fe068bb1"} Jan 30 06:48:35 crc kubenswrapper[4931]: I0130 06:48:35.750391 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"faef005b-c58c-4b22-944c-defd3471fa32","Type":"ContainerStarted","Data":"45728b94f180ab44ef641bed1d4b55e655ae789f5c8e494f0c900fdbd87751b5"} Jan 30 06:48:35 crc kubenswrapper[4931]: I0130 06:48:35.751519 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 30 06:48:35 crc kubenswrapper[4931]: I0130 06:48:35.755013 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 30 06:48:35 crc kubenswrapper[4931]: I0130 06:48:35.798043 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.709086714 podStartE2EDuration="45.798022089s" podCreationTimestamp="2026-01-30 06:47:50 +0000 UTC" firstStartedPulling="2026-01-30 06:47:51.836447664 +0000 UTC m=+6007.206357921" lastFinishedPulling="2026-01-30 06:48:29.925382999 +0000 UTC m=+6045.295293296" observedRunningTime="2026-01-30 06:48:35.7866863 +0000 UTC m=+6051.156596587" watchObservedRunningTime="2026-01-30 06:48:35.798022089 +0000 UTC m=+6051.167932356" Jan 30 06:48:37 crc kubenswrapper[4931]: E0130 06:48:37.574736 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="201626a3-bc04-48ab-859c-5a7ffe97670e" Jan 30 06:48:37 crc kubenswrapper[4931]: I0130 06:48:37.771107 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"201626a3-bc04-48ab-859c-5a7ffe97670e","Type":"ContainerStarted","Data":"d2617c77f0fdebe0e80d581b32539d895e8c20afcf9a30ae8785370e94df527f"} Jan 30 06:48:37 crc kubenswrapper[4931]: E0130 06:48:37.772821 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="201626a3-bc04-48ab-859c-5a7ffe97670e" Jan 30 06:48:38 crc kubenswrapper[4931]: E0130 06:48:38.780411 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="201626a3-bc04-48ab-859c-5a7ffe97670e" Jan 30 06:48:45 crc kubenswrapper[4931]: 
I0130 06:48:45.056879 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xg8js"] Jan 30 06:48:45 crc kubenswrapper[4931]: I0130 06:48:45.069473 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xg8js"] Jan 30 06:48:45 crc kubenswrapper[4931]: I0130 06:48:45.441092 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ed4dbf-5bb0-45b9-bc15-763a93ba7375" path="/var/lib/kubelet/pods/97ed4dbf-5bb0-45b9-bc15-763a93ba7375/volumes" Jan 30 06:48:46 crc kubenswrapper[4931]: I0130 06:48:46.036709 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-49hcs"] Jan 30 06:48:46 crc kubenswrapper[4931]: I0130 06:48:46.054594 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-49hcs"] Jan 30 06:48:47 crc kubenswrapper[4931]: I0130 06:48:47.444488 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f484f87-1747-491b-a6c5-dd1d51ff66af" path="/var/lib/kubelet/pods/3f484f87-1747-491b-a6c5-dd1d51ff66af/volumes" Jan 30 06:48:52 crc kubenswrapper[4931]: I0130 06:48:52.924700 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"201626a3-bc04-48ab-859c-5a7ffe97670e","Type":"ContainerStarted","Data":"8ec51fb652154b51616a8a659fc911eabe3266bab8b1482a87fa750159b29d69"} Jan 30 06:48:52 crc kubenswrapper[4931]: I0130 06:48:52.949090 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.252430203 podStartE2EDuration="1m3.949071639s" podCreationTimestamp="2026-01-30 06:47:49 +0000 UTC" firstStartedPulling="2026-01-30 06:47:52.54963185 +0000 UTC m=+6007.919542117" lastFinishedPulling="2026-01-30 06:48:52.246273296 +0000 UTC m=+6067.616183553" observedRunningTime="2026-01-30 06:48:52.94627025 +0000 UTC m=+6068.316180507" watchObservedRunningTime="2026-01-30 06:48:52.949071639 +0000 UTC m=+6068.318981896" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.237141 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.240473 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.243663 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.244008 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.285303 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.353844 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-config-data\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.353947 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-scripts\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.353983 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.354086 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-log-httpd\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.354132 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-run-httpd\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.354159 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95ck\" (UniqueName: \"kubernetes.io/projected/1ace174a-c316-432c-82da-840f5e2283d1-kube-api-access-j95ck\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.354175 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.455956 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 
06:48:56.456448 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-log-httpd\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.456640 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-run-httpd\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.456687 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95ck\" (UniqueName: \"kubernetes.io/projected/1ace174a-c316-432c-82da-840f5e2283d1-kube-api-access-j95ck\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.456727 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.456755 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-config-data\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.456851 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-scripts\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.457136 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-log-httpd\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.458229 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-run-httpd\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.463762 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-scripts\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.465075 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.466071 4931 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.469578 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-config-data\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.483076 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95ck\" (UniqueName: \"kubernetes.io/projected/1ace174a-c316-432c-82da-840f5e2283d1-kube-api-access-j95ck\") pod \"ceilometer-0\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.559909 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:48:56 crc kubenswrapper[4931]: I0130 06:48:56.993941 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 30 06:48:57 crc kubenswrapper[4931]: I0130 06:48:57.099369 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:48:57 crc kubenswrapper[4931]: W0130 06:48:57.100551 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ace174a_c316_432c_82da_840f5e2283d1.slice/crio-8f0ee7d141f630c7e73df9127c0fee9f75d55fe3a9ad54a729725d13272a81b3 WatchSource:0}: Error finding container 8f0ee7d141f630c7e73df9127c0fee9f75d55fe3a9ad54a729725d13272a81b3: Status 404 returned error can't find the container with id 8f0ee7d141f630c7e73df9127c0fee9f75d55fe3a9ad54a729725d13272a81b3 Jan 30 06:48:57 crc kubenswrapper[4931]: I0130 06:48:57.968303 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerStarted","Data":"aed3b001cce71a5012e1a07b41789b31d0ea72897e75f2bf7e3bbd6b03358b00"} Jan 30 06:48:57 crc kubenswrapper[4931]: I0130 06:48:57.968955 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerStarted","Data":"8f0ee7d141f630c7e73df9127c0fee9f75d55fe3a9ad54a729725d13272a81b3"} Jan 30 06:48:58 crc kubenswrapper[4931]: I0130 06:48:58.980745 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerStarted","Data":"67791474132ea95207d226ce291f362b43314fdb91b8a0365481b75a910cde7a"} Jan 30 06:48:59 crc kubenswrapper[4931]: I0130 06:48:59.990631 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerStarted","Data":"dc00b4b18ec22a13fffbcb9088cbc8844fa86ef6d3929196ba622b907b1b6226"} Jan 30 06:49:01 crc kubenswrapper[4931]: I0130 06:49:01.810834 4931 scope.go:117] "RemoveContainer" containerID="982580309a618618acf59d7ed62dffc9baa63654e107e79e17f31ae5e09b9d10" Jan 30 06:49:01 crc kubenswrapper[4931]: I0130 06:49:01.848920 4931 scope.go:117] "RemoveContainer" containerID="0945601f9dc541b7489d17e996b29ddbb60ba07b0e0dec353dfa850db402078c" Jan 30 06:49:01 crc 
kubenswrapper[4931]: I0130 06:49:01.918174 4931 scope.go:117] "RemoveContainer" containerID="6d585a871e0bcc02099ca7b0fec64c91cc44765f6bff9850b839ed74e64354fb" Jan 30 06:49:01 crc kubenswrapper[4931]: I0130 06:49:01.958394 4931 scope.go:117] "RemoveContainer" containerID="f317a267d263377c23363a6996fcb39842543912fc8f152fbbaa9e502e107fac" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.015775 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerStarted","Data":"12aa02774ebae7d466712369ad3821a4d5645f4c4076a555423c271558b4c479"} Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.015878 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.020091 4931 scope.go:117] "RemoveContainer" containerID="5ecad2bc0e017bf1777e069b0d51cce5fdf83ffb212486a161ec83e5ab28a776" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.044980 4931 scope.go:117] "RemoveContainer" containerID="3e425706ce28b09c3d63a84e9658921cb423491b485b43961ce5f36589fbc46a" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.064725 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.964262172 podStartE2EDuration="6.064702147s" podCreationTimestamp="2026-01-30 06:48:56 +0000 UTC" firstStartedPulling="2026-01-30 06:48:57.102917546 +0000 UTC m=+6072.472827803" lastFinishedPulling="2026-01-30 06:49:01.203357521 +0000 UTC m=+6076.573267778" observedRunningTime="2026-01-30 06:49:02.036323998 +0000 UTC m=+6077.406234255" watchObservedRunningTime="2026-01-30 06:49:02.064702147 +0000 UTC m=+6077.434612414" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.083638 4931 scope.go:117] "RemoveContainer" containerID="e6ba02a25b1e0ee2f6ab68e84a5695ffab01e5de9788f62975d6f6c203f6437a" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.103995 4931 scope.go:117] "RemoveContainer" containerID="4fbc33cf72a98eaa43152775b96af49bda5487d618cfc4a531333efaa8d2b27d" Jan 30 06:49:02 crc kubenswrapper[4931]: I0130 06:49:02.147027 4931 scope.go:117] "RemoveContainer" containerID="27e13b6b085d0725e59ac8bc7078474da43dc2cdd83ba76b6820d4ffc9d8594f" Jan 30 06:49:04 crc kubenswrapper[4931]: I0130 06:49:04.051580 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xqzzz"] Jan 30 06:49:04 crc kubenswrapper[4931]: I0130 06:49:04.068572 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xqzzz"] Jan 30 06:49:05 crc kubenswrapper[4931]: I0130 06:49:05.435327 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d80061de-8d87-4c58-8733-26c5224bf03a" path="/var/lib/kubelet/pods/d80061de-8d87-4c58-8733-26c5224bf03a/volumes" Jan 30 06:49:06 crc kubenswrapper[4931]: I0130 06:49:06.993183 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 30 06:49:06 crc kubenswrapper[4931]: I0130 06:49:06.997982 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 30 06:49:07 crc kubenswrapper[4931]: I0130 06:49:07.090473 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.383873 4931 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-db-create-fmv6s"] Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.385608 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.398679 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fmv6s"] Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.481326 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-cb1e-account-create-update-6drdw"] Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.482792 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.485761 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.492951 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-cb1e-account-create-update-6drdw"] Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.515893 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681b527a-d511-4db8-8f19-1df02bbf9f61-operator-scripts\") pod \"aodh-db-create-fmv6s\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.516098 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfwg\" (UniqueName: \"kubernetes.io/projected/681b527a-d511-4db8-8f19-1df02bbf9f61-kube-api-access-ntfwg\") pod \"aodh-db-create-fmv6s\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.617775 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e6957d-e715-4a84-9952-19f773cfe882-operator-scripts\") pod \"aodh-cb1e-account-create-update-6drdw\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.617975 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681b527a-d511-4db8-8f19-1df02bbf9f61-operator-scripts\") pod \"aodh-db-create-fmv6s\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.618033 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjvsj\" (UniqueName: \"kubernetes.io/projected/51e6957d-e715-4a84-9952-19f773cfe882-kube-api-access-zjvsj\") pod \"aodh-cb1e-account-create-update-6drdw\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.618370 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntfwg\" (UniqueName: \"kubernetes.io/projected/681b527a-d511-4db8-8f19-1df02bbf9f61-kube-api-access-ntfwg\") pod \"aodh-db-create-fmv6s\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 
06:49:08.618757 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681b527a-d511-4db8-8f19-1df02bbf9f61-operator-scripts\") pod \"aodh-db-create-fmv6s\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.642298 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntfwg\" (UniqueName: \"kubernetes.io/projected/681b527a-d511-4db8-8f19-1df02bbf9f61-kube-api-access-ntfwg\") pod \"aodh-db-create-fmv6s\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.716454 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.719395 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e6957d-e715-4a84-9952-19f773cfe882-operator-scripts\") pod \"aodh-cb1e-account-create-update-6drdw\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.719542 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjvsj\" (UniqueName: \"kubernetes.io/projected/51e6957d-e715-4a84-9952-19f773cfe882-kube-api-access-zjvsj\") pod \"aodh-cb1e-account-create-update-6drdw\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.720045 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e6957d-e715-4a84-9952-19f773cfe882-operator-scripts\") pod \"aodh-cb1e-account-create-update-6drdw\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.739911 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjvsj\" (UniqueName: \"kubernetes.io/projected/51e6957d-e715-4a84-9952-19f773cfe882-kube-api-access-zjvsj\") pod \"aodh-cb1e-account-create-update-6drdw\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:08 crc kubenswrapper[4931]: I0130 06:49:08.800621 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:09 crc kubenswrapper[4931]: I0130 06:49:09.312671 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fmv6s"] Jan 30 06:49:09 crc kubenswrapper[4931]: W0130 06:49:09.318557 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod681b527a_d511_4db8_8f19_1df02bbf9f61.slice/crio-bfc55abe9e3678fe35af1979b421a6ca3ccdcbf00f9ed457fa2b8d11555bca9d WatchSource:0}: Error finding container bfc55abe9e3678fe35af1979b421a6ca3ccdcbf00f9ed457fa2b8d11555bca9d: Status 404 returned error can't find the container with id bfc55abe9e3678fe35af1979b421a6ca3ccdcbf00f9ed457fa2b8d11555bca9d Jan 30 06:49:09 crc kubenswrapper[4931]: I0130 06:49:09.581298 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-cb1e-account-create-update-6drdw"] Jan 30 06:49:10 crc kubenswrapper[4931]: I0130 06:49:10.136381 4931 generic.go:334] "Generic (PLEG): container finished" podID="681b527a-d511-4db8-8f19-1df02bbf9f61" containerID="80ab6efc7f6dcfb70eed703ea54962d42118f91ddd843c75b9238af6658827ba" exitCode=0 Jan 30 06:49:10 crc kubenswrapper[4931]: I0130 06:49:10.136501 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmv6s" event={"ID":"681b527a-d511-4db8-8f19-1df02bbf9f61","Type":"ContainerDied","Data":"80ab6efc7f6dcfb70eed703ea54962d42118f91ddd843c75b9238af6658827ba"} Jan 30 06:49:10 crc kubenswrapper[4931]: I0130 06:49:10.136817 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmv6s" event={"ID":"681b527a-d511-4db8-8f19-1df02bbf9f61","Type":"ContainerStarted","Data":"bfc55abe9e3678fe35af1979b421a6ca3ccdcbf00f9ed457fa2b8d11555bca9d"} Jan 30 06:49:10 crc kubenswrapper[4931]: I0130 06:49:10.139814 4931 generic.go:334] "Generic (PLEG): container finished" podID="51e6957d-e715-4a84-9952-19f773cfe882" containerID="0e4d3615364adb9fc327ac5ce20cdd4fecf281a043a844859ed5dca539ce5720" exitCode=0 Jan 30 06:49:10 crc kubenswrapper[4931]: I0130 06:49:10.139847 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cb1e-account-create-update-6drdw" event={"ID":"51e6957d-e715-4a84-9952-19f773cfe882","Type":"ContainerDied","Data":"0e4d3615364adb9fc327ac5ce20cdd4fecf281a043a844859ed5dca539ce5720"} Jan 30 06:49:10 crc kubenswrapper[4931]: I0130 06:49:10.139866 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cb1e-account-create-update-6drdw" event={"ID":"51e6957d-e715-4a84-9952-19f773cfe882","Type":"ContainerStarted","Data":"36b11eaacb0459b577609608abbf56aefe928116c6f3f0ce4cddffd24a1980f3"} Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.677312 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.710518 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.791769 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e6957d-e715-4a84-9952-19f773cfe882-operator-scripts\") pod \"51e6957d-e715-4a84-9952-19f773cfe882\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.792042 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjvsj\" (UniqueName: \"kubernetes.io/projected/51e6957d-e715-4a84-9952-19f773cfe882-kube-api-access-zjvsj\") pod \"51e6957d-e715-4a84-9952-19f773cfe882\" (UID: \"51e6957d-e715-4a84-9952-19f773cfe882\") " Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.792281 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e6957d-e715-4a84-9952-19f773cfe882-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51e6957d-e715-4a84-9952-19f773cfe882" (UID: "51e6957d-e715-4a84-9952-19f773cfe882"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.792726 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e6957d-e715-4a84-9952-19f773cfe882-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.797674 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e6957d-e715-4a84-9952-19f773cfe882-kube-api-access-zjvsj" (OuterVolumeSpecName: "kube-api-access-zjvsj") pod "51e6957d-e715-4a84-9952-19f773cfe882" (UID: "51e6957d-e715-4a84-9952-19f773cfe882"). InnerVolumeSpecName "kube-api-access-zjvsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.894175 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681b527a-d511-4db8-8f19-1df02bbf9f61-operator-scripts\") pod \"681b527a-d511-4db8-8f19-1df02bbf9f61\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.894320 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntfwg\" (UniqueName: \"kubernetes.io/projected/681b527a-d511-4db8-8f19-1df02bbf9f61-kube-api-access-ntfwg\") pod \"681b527a-d511-4db8-8f19-1df02bbf9f61\" (UID: \"681b527a-d511-4db8-8f19-1df02bbf9f61\") " Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.894890 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjvsj\" (UniqueName: \"kubernetes.io/projected/51e6957d-e715-4a84-9952-19f773cfe882-kube-api-access-zjvsj\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.894951 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681b527a-d511-4db8-8f19-1df02bbf9f61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "681b527a-d511-4db8-8f19-1df02bbf9f61" (UID: "681b527a-d511-4db8-8f19-1df02bbf9f61"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.897621 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681b527a-d511-4db8-8f19-1df02bbf9f61-kube-api-access-ntfwg" (OuterVolumeSpecName: "kube-api-access-ntfwg") pod "681b527a-d511-4db8-8f19-1df02bbf9f61" (UID: "681b527a-d511-4db8-8f19-1df02bbf9f61"). InnerVolumeSpecName "kube-api-access-ntfwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.997015 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntfwg\" (UniqueName: \"kubernetes.io/projected/681b527a-d511-4db8-8f19-1df02bbf9f61-kube-api-access-ntfwg\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:11.997371 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/681b527a-d511-4db8-8f19-1df02bbf9f61-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:12.167895 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmv6s" event={"ID":"681b527a-d511-4db8-8f19-1df02bbf9f61","Type":"ContainerDied","Data":"bfc55abe9e3678fe35af1979b421a6ca3ccdcbf00f9ed457fa2b8d11555bca9d"} Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:12.167939 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fmv6s" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:12.167944 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfc55abe9e3678fe35af1979b421a6ca3ccdcbf00f9ed457fa2b8d11555bca9d" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:12.170010 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-cb1e-account-create-update-6drdw" event={"ID":"51e6957d-e715-4a84-9952-19f773cfe882","Type":"ContainerDied","Data":"36b11eaacb0459b577609608abbf56aefe928116c6f3f0ce4cddffd24a1980f3"} Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:12.170049 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b11eaacb0459b577609608abbf56aefe928116c6f3f0ce4cddffd24a1980f3" Jan 30 06:49:12 crc kubenswrapper[4931]: I0130 06:49:12.170066 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-cb1e-account-create-update-6drdw" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.883369 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-rq4fv"] Jan 30 06:49:13 crc kubenswrapper[4931]: E0130 06:49:13.884164 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="681b527a-d511-4db8-8f19-1df02bbf9f61" containerName="mariadb-database-create" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.884181 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="681b527a-d511-4db8-8f19-1df02bbf9f61" containerName="mariadb-database-create" Jan 30 06:49:13 crc kubenswrapper[4931]: E0130 06:49:13.884216 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e6957d-e715-4a84-9952-19f773cfe882" containerName="mariadb-account-create-update" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.884223 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e6957d-e715-4a84-9952-19f773cfe882" containerName="mariadb-account-create-update" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.884480 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="681b527a-d511-4db8-8f19-1df02bbf9f61" containerName="mariadb-database-create" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.884499 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e6957d-e715-4a84-9952-19f773cfe882" containerName="mariadb-account-create-update" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.885379 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.891244 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.893841 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xxpdc" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.893886 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.893952 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.900024 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rq4fv"] Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.964098 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-combined-ca-bundle\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.964151 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-scripts\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.964183 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwl2d\" (UniqueName: \"kubernetes.io/projected/76eec61d-6ff6-4286-9102-758374c6fa27-kube-api-access-qwl2d\") pod 
\"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:13 crc kubenswrapper[4931]: I0130 06:49:13.964308 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-config-data\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.065667 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-config-data\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.065740 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-combined-ca-bundle\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.065766 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-scripts\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.065794 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwl2d\" (UniqueName: \"kubernetes.io/projected/76eec61d-6ff6-4286-9102-758374c6fa27-kube-api-access-qwl2d\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.071166 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-scripts\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.071686 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-combined-ca-bundle\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.071719 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-config-data\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.092909 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwl2d\" (UniqueName: \"kubernetes.io/projected/76eec61d-6ff6-4286-9102-758374c6fa27-kube-api-access-qwl2d\") pod \"aodh-db-sync-rq4fv\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.212732 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:14 crc kubenswrapper[4931]: I0130 06:49:14.717018 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rq4fv"] Jan 30 06:49:14 crc kubenswrapper[4931]: W0130 06:49:14.738472 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76eec61d_6ff6_4286_9102_758374c6fa27.slice/crio-ce6b29ec009d92bf4beaaf1185150d3f0a5fc6375f5de7e6f45a3df21d88ab7f WatchSource:0}: Error finding container ce6b29ec009d92bf4beaaf1185150d3f0a5fc6375f5de7e6f45a3df21d88ab7f: Status 404 returned error can't find the container with id ce6b29ec009d92bf4beaaf1185150d3f0a5fc6375f5de7e6f45a3df21d88ab7f Jan 30 06:49:15 crc kubenswrapper[4931]: I0130 06:49:15.205471 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rq4fv" event={"ID":"76eec61d-6ff6-4286-9102-758374c6fa27","Type":"ContainerStarted","Data":"ce6b29ec009d92bf4beaaf1185150d3f0a5fc6375f5de7e6f45a3df21d88ab7f"} Jan 30 06:49:20 crc kubenswrapper[4931]: I0130 06:49:20.266758 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rq4fv" event={"ID":"76eec61d-6ff6-4286-9102-758374c6fa27","Type":"ContainerStarted","Data":"113fabb6410c6ac50615d981ed7da97e3148c88e8fc0cf34f88de6f851a2a62e"} Jan 30 06:49:20 crc kubenswrapper[4931]: I0130 06:49:20.286375 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-rq4fv" podStartSLOduration=2.059646285 podStartE2EDuration="7.286352003s" podCreationTimestamp="2026-01-30 06:49:13 +0000 UTC" firstStartedPulling="2026-01-30 06:49:14.74152729 +0000 UTC m=+6090.111437547" lastFinishedPulling="2026-01-30 06:49:19.968232968 +0000 UTC m=+6095.338143265" observedRunningTime="2026-01-30 06:49:20.280613621 +0000 UTC m=+6095.650523888" watchObservedRunningTime="2026-01-30 06:49:20.286352003 +0000 UTC m=+6095.656262270" Jan 30 06:49:22 crc kubenswrapper[4931]: I0130 06:49:22.301525 4931 generic.go:334] "Generic (PLEG): container finished" podID="76eec61d-6ff6-4286-9102-758374c6fa27" containerID="113fabb6410c6ac50615d981ed7da97e3148c88e8fc0cf34f88de6f851a2a62e" exitCode=0 Jan 30 06:49:22 crc kubenswrapper[4931]: I0130 06:49:22.301686 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rq4fv" event={"ID":"76eec61d-6ff6-4286-9102-758374c6fa27","Type":"ContainerDied","Data":"113fabb6410c6ac50615d981ed7da97e3148c88e8fc0cf34f88de6f851a2a62e"} Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.753853 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.806531 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-config-data\") pod \"76eec61d-6ff6-4286-9102-758374c6fa27\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.806603 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-combined-ca-bundle\") pod \"76eec61d-6ff6-4286-9102-758374c6fa27\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.806746 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwl2d\" (UniqueName: \"kubernetes.io/projected/76eec61d-6ff6-4286-9102-758374c6fa27-kube-api-access-qwl2d\") pod \"76eec61d-6ff6-4286-9102-758374c6fa27\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.806844 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-scripts\") pod \"76eec61d-6ff6-4286-9102-758374c6fa27\" (UID: \"76eec61d-6ff6-4286-9102-758374c6fa27\") " Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.812701 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-scripts" (OuterVolumeSpecName: "scripts") pod "76eec61d-6ff6-4286-9102-758374c6fa27" (UID: "76eec61d-6ff6-4286-9102-758374c6fa27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.812898 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76eec61d-6ff6-4286-9102-758374c6fa27-kube-api-access-qwl2d" (OuterVolumeSpecName: "kube-api-access-qwl2d") pod "76eec61d-6ff6-4286-9102-758374c6fa27" (UID: "76eec61d-6ff6-4286-9102-758374c6fa27"). InnerVolumeSpecName "kube-api-access-qwl2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.835933 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-config-data" (OuterVolumeSpecName: "config-data") pod "76eec61d-6ff6-4286-9102-758374c6fa27" (UID: "76eec61d-6ff6-4286-9102-758374c6fa27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.842568 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76eec61d-6ff6-4286-9102-758374c6fa27" (UID: "76eec61d-6ff6-4286-9102-758374c6fa27"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.909360 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.909397 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.909408 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwl2d\" (UniqueName: \"kubernetes.io/projected/76eec61d-6ff6-4286-9102-758374c6fa27-kube-api-access-qwl2d\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:23 crc kubenswrapper[4931]: I0130 06:49:23.909427 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76eec61d-6ff6-4286-9102-758374c6fa27-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:24 crc kubenswrapper[4931]: I0130 06:49:24.329189 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rq4fv" event={"ID":"76eec61d-6ff6-4286-9102-758374c6fa27","Type":"ContainerDied","Data":"ce6b29ec009d92bf4beaaf1185150d3f0a5fc6375f5de7e6f45a3df21d88ab7f"} Jan 30 06:49:24 crc kubenswrapper[4931]: I0130 06:49:24.329232 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce6b29ec009d92bf4beaaf1185150d3f0a5fc6375f5de7e6f45a3df21d88ab7f" Jan 30 06:49:24 crc kubenswrapper[4931]: I0130 06:49:24.329242 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rq4fv" Jan 30 06:49:26 crc kubenswrapper[4931]: I0130 06:49:26.577913 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 06:49:27 crc kubenswrapper[4931]: I0130 06:49:27.363044 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:49:27 crc kubenswrapper[4931]: I0130 06:49:27.363381 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.478289 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 30 06:49:28 crc kubenswrapper[4931]: E0130 06:49:28.479011 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eec61d-6ff6-4286-9102-758374c6fa27" containerName="aodh-db-sync" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.479025 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eec61d-6ff6-4286-9102-758374c6fa27" containerName="aodh-db-sync" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.479212 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="76eec61d-6ff6-4286-9102-758374c6fa27" containerName="aodh-db-sync" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.487636 4931 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.491272 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-xxpdc" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.491404 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.491499 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.494070 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.605218 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc96p\" (UniqueName: \"kubernetes.io/projected/890734fc-018f-4d2e-bc3e-ef4399f477da-kube-api-access-nc96p\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.605468 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-scripts\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.605765 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-combined-ca-bundle\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.605872 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-config-data\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.708911 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-combined-ca-bundle\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.710828 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-config-data\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.710871 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc96p\" (UniqueName: \"kubernetes.io/projected/890734fc-018f-4d2e-bc3e-ef4399f477da-kube-api-access-nc96p\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.710939 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-scripts\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc 
kubenswrapper[4931]: I0130 06:49:28.716045 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-config-data\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.716324 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-combined-ca-bundle\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.720966 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/890734fc-018f-4d2e-bc3e-ef4399f477da-scripts\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.736503 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc96p\" (UniqueName: \"kubernetes.io/projected/890734fc-018f-4d2e-bc3e-ef4399f477da-kube-api-access-nc96p\") pod \"aodh-0\" (UID: \"890734fc-018f-4d2e-bc3e-ef4399f477da\") " pod="openstack/aodh-0" Jan 30 06:49:28 crc kubenswrapper[4931]: I0130 06:49:28.814238 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 30 06:49:29 crc kubenswrapper[4931]: I0130 06:49:29.294897 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 30 06:49:29 crc kubenswrapper[4931]: I0130 06:49:29.303188 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:49:29 crc kubenswrapper[4931]: I0130 06:49:29.379176 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"890734fc-018f-4d2e-bc3e-ef4399f477da","Type":"ContainerStarted","Data":"287f5d40351da171b121ab371f559dd20696a906d00896d0c40736967109d5e3"} Jan 30 06:49:30 crc kubenswrapper[4931]: I0130 06:49:30.390274 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"890734fc-018f-4d2e-bc3e-ef4399f477da","Type":"ContainerStarted","Data":"25e3b66d4c9038129e9edcdc5fafec226a5cbf6fda303811185ce570a2d99c73"} Jan 30 06:49:30 crc kubenswrapper[4931]: I0130 06:49:30.645921 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:49:30 crc kubenswrapper[4931]: I0130 06:49:30.646168 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-central-agent" containerID="cri-o://aed3b001cce71a5012e1a07b41789b31d0ea72897e75f2bf7e3bbd6b03358b00" gracePeriod=30 Jan 30 06:49:30 crc kubenswrapper[4931]: I0130 06:49:30.646318 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-notification-agent" containerID="cri-o://67791474132ea95207d226ce291f362b43314fdb91b8a0365481b75a910cde7a" gracePeriod=30 Jan 30 06:49:30 crc kubenswrapper[4931]: I0130 06:49:30.646320 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="proxy-httpd" 
containerID="cri-o://12aa02774ebae7d466712369ad3821a4d5645f4c4076a555423c271558b4c479" gracePeriod=30 Jan 30 06:49:30 crc kubenswrapper[4931]: I0130 06:49:30.646548 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="sg-core" containerID="cri-o://dc00b4b18ec22a13fffbcb9088cbc8844fa86ef6d3929196ba622b907b1b6226" gracePeriod=30 Jan 30 06:49:31 crc kubenswrapper[4931]: I0130 06:49:31.403625 4931 generic.go:334] "Generic (PLEG): container finished" podID="1ace174a-c316-432c-82da-840f5e2283d1" containerID="12aa02774ebae7d466712369ad3821a4d5645f4c4076a555423c271558b4c479" exitCode=0 Jan 30 06:49:31 crc kubenswrapper[4931]: I0130 06:49:31.403955 4931 generic.go:334] "Generic (PLEG): container finished" podID="1ace174a-c316-432c-82da-840f5e2283d1" containerID="dc00b4b18ec22a13fffbcb9088cbc8844fa86ef6d3929196ba622b907b1b6226" exitCode=2 Jan 30 06:49:31 crc kubenswrapper[4931]: I0130 06:49:31.403980 4931 generic.go:334] "Generic (PLEG): container finished" podID="1ace174a-c316-432c-82da-840f5e2283d1" containerID="aed3b001cce71a5012e1a07b41789b31d0ea72897e75f2bf7e3bbd6b03358b00" exitCode=0 Jan 30 06:49:31 crc kubenswrapper[4931]: I0130 06:49:31.403710 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerDied","Data":"12aa02774ebae7d466712369ad3821a4d5645f4c4076a555423c271558b4c479"} Jan 30 06:49:31 crc kubenswrapper[4931]: I0130 06:49:31.404021 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerDied","Data":"dc00b4b18ec22a13fffbcb9088cbc8844fa86ef6d3929196ba622b907b1b6226"} Jan 30 06:49:31 crc kubenswrapper[4931]: I0130 06:49:31.404040 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerDied","Data":"aed3b001cce71a5012e1a07b41789b31d0ea72897e75f2bf7e3bbd6b03358b00"} Jan 30 06:49:32 crc kubenswrapper[4931]: I0130 06:49:32.417635 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"890734fc-018f-4d2e-bc3e-ef4399f477da","Type":"ContainerStarted","Data":"9664b2dcf5cec3eb886c37c94885b290b4800534da7ca6b602895c3587b1d543"} Jan 30 06:49:34 crc kubenswrapper[4931]: I0130 06:49:34.463887 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"890734fc-018f-4d2e-bc3e-ef4399f477da","Type":"ContainerStarted","Data":"ee2123c23f00ddba11257e42d25ee445a7354f1e8f5862e132b2412004b6f861"} Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.484773 4931 generic.go:334] "Generic (PLEG): container finished" podID="1ace174a-c316-432c-82da-840f5e2283d1" containerID="67791474132ea95207d226ce291f362b43314fdb91b8a0365481b75a910cde7a" exitCode=0 Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.485275 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerDied","Data":"67791474132ea95207d226ce291f362b43314fdb91b8a0365481b75a910cde7a"} Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.509907 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"890734fc-018f-4d2e-bc3e-ef4399f477da","Type":"ContainerStarted","Data":"57a999a5726ef486c30936fca19673a3ee9e7ca668e958733eb7c4e34f5d925d"} Jan 30 06:49:35 crc 
kubenswrapper[4931]: I0130 06:49:35.536474 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.638814287 podStartE2EDuration="7.536449121s" podCreationTimestamp="2026-01-30 06:49:28 +0000 UTC" firstStartedPulling="2026-01-30 06:49:29.30284615 +0000 UTC m=+6104.672756417" lastFinishedPulling="2026-01-30 06:49:35.200480994 +0000 UTC m=+6110.570391251" observedRunningTime="2026-01-30 06:49:35.527607433 +0000 UTC m=+6110.897517710" watchObservedRunningTime="2026-01-30 06:49:35.536449121 +0000 UTC m=+6110.906359378" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.600031 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668166 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-scripts\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668209 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-run-httpd\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668253 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-config-data\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668441 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-sg-core-conf-yaml\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668466 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-combined-ca-bundle\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668500 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-log-httpd\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.668518 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95ck\" (UniqueName: \"kubernetes.io/projected/1ace174a-c316-432c-82da-840f5e2283d1-kube-api-access-j95ck\") pod \"1ace174a-c316-432c-82da-840f5e2283d1\" (UID: \"1ace174a-c316-432c-82da-840f5e2283d1\") " Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.669742 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.670076 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.675051 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-scripts" (OuterVolumeSpecName: "scripts") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.682822 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ace174a-c316-432c-82da-840f5e2283d1-kube-api-access-j95ck" (OuterVolumeSpecName: "kube-api-access-j95ck") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). InnerVolumeSpecName "kube-api-access-j95ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.714667 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.767647 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.770725 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.770763 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.770775 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.770789 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.770800 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ace174a-c316-432c-82da-840f5e2283d1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.770811 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95ck\" (UniqueName: \"kubernetes.io/projected/1ace174a-c316-432c-82da-840f5e2283d1-kube-api-access-j95ck\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.787695 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-config-data" (OuterVolumeSpecName: "config-data") pod "1ace174a-c316-432c-82da-840f5e2283d1" (UID: "1ace174a-c316-432c-82da-840f5e2283d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:49:35 crc kubenswrapper[4931]: I0130 06:49:35.872696 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ace174a-c316-432c-82da-840f5e2283d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.521087 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.523278 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ace174a-c316-432c-82da-840f5e2283d1","Type":"ContainerDied","Data":"8f0ee7d141f630c7e73df9127c0fee9f75d55fe3a9ad54a729725d13272a81b3"} Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.523323 4931 scope.go:117] "RemoveContainer" containerID="12aa02774ebae7d466712369ad3821a4d5645f4c4076a555423c271558b4c479" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.545316 4931 scope.go:117] "RemoveContainer" containerID="dc00b4b18ec22a13fffbcb9088cbc8844fa86ef6d3929196ba622b907b1b6226" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.571942 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.573599 4931 scope.go:117] "RemoveContainer" containerID="67791474132ea95207d226ce291f362b43314fdb91b8a0365481b75a910cde7a" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.595315 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.600058 4931 scope.go:117] "RemoveContainer" containerID="aed3b001cce71a5012e1a07b41789b31d0ea72897e75f2bf7e3bbd6b03358b00" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.605323 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:49:36 crc kubenswrapper[4931]: E0130 06:49:36.605855 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="proxy-httpd" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.605872 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="proxy-httpd" Jan 30 06:49:36 crc kubenswrapper[4931]: E0130 06:49:36.605902 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-central-agent" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.605908 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-central-agent" Jan 30 06:49:36 crc kubenswrapper[4931]: E0130 06:49:36.605925 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-notification-agent" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.605932 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-notification-agent" Jan 30 06:49:36 crc kubenswrapper[4931]: E0130 06:49:36.605943 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="sg-core" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.605950 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="sg-core" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.606135 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-central-agent" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.606164 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="proxy-httpd" Jan 30 06:49:36 crc 
kubenswrapper[4931]: I0130 06:49:36.606174 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="ceilometer-notification-agent" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.606184 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ace174a-c316-432c-82da-840f5e2283d1" containerName="sg-core" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.608097 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.613849 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.614736 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.631169 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.687890 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-run-httpd\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.687929 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.687968 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-scripts\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.688008 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.688031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-config-data\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.688087 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-log-httpd\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.688108 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxwx\" (UniqueName: \"kubernetes.io/projected/83416e39-1feb-47a7-9e5d-748122bed281-kube-api-access-pqxwx\") pod 
\"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.789753 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.789822 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-config-data\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.789905 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-log-httpd\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.789929 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxwx\" (UniqueName: \"kubernetes.io/projected/83416e39-1feb-47a7-9e5d-748122bed281-kube-api-access-pqxwx\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.790049 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-run-httpd\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.790071 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.790128 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-scripts\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.790751 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-log-httpd\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.790790 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-run-httpd\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.796043 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.796850 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-scripts\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.797750 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-config-data\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.811793 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxwx\" (UniqueName: \"kubernetes.io/projected/83416e39-1feb-47a7-9e5d-748122bed281-kube-api-access-pqxwx\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.817619 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " pod="openstack/ceilometer-0" Jan 30 06:49:36 crc kubenswrapper[4931]: I0130 06:49:36.932728 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:49:37 crc kubenswrapper[4931]: I0130 06:49:37.438143 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ace174a-c316-432c-82da-840f5e2283d1" path="/var/lib/kubelet/pods/1ace174a-c316-432c-82da-840f5e2283d1/volumes" Jan 30 06:49:37 crc kubenswrapper[4931]: I0130 06:49:37.442481 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:49:37 crc kubenswrapper[4931]: I0130 06:49:37.530813 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerStarted","Data":"c52383b65f03eb04f4d7782d65ea62f04f107a518cccdbc33d3de6808caeee13"} Jan 30 06:49:38 crc kubenswrapper[4931]: I0130 06:49:38.541016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerStarted","Data":"e6075bddf7d9e844cbcadfacf5366aedab5b6c0869d55610da59bca13b8f2e10"} Jan 30 06:49:39 crc kubenswrapper[4931]: I0130 06:49:39.552034 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerStarted","Data":"01bff7fca0dad20c19afe6ca1af5b694418bb46bbfaf5130445564c32e038144"} Jan 30 06:49:42 crc kubenswrapper[4931]: I0130 06:49:42.580325 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerStarted","Data":"d6ffd18a5fcda569a257ea582064d96c557e419380bbc5b6e6ffba7dcc0e2a22"} Jan 30 06:49:44 crc kubenswrapper[4931]: I0130 06:49:44.625265 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerStarted","Data":"c0fc86cc25abacab572b554b477bbaff87a06a333e44043d43cc5be409f8a89e"} Jan 30 06:49:44 crc 
kubenswrapper[4931]: I0130 06:49:44.625868 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 06:49:44 crc kubenswrapper[4931]: I0130 06:49:44.656038 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.995757448 podStartE2EDuration="8.65602008s" podCreationTimestamp="2026-01-30 06:49:36 +0000 UTC" firstStartedPulling="2026-01-30 06:49:37.45193214 +0000 UTC m=+6112.821842397" lastFinishedPulling="2026-01-30 06:49:44.112194772 +0000 UTC m=+6119.482105029" observedRunningTime="2026-01-30 06:49:44.649940389 +0000 UTC m=+6120.019850646" watchObservedRunningTime="2026-01-30 06:49:44.65602008 +0000 UTC m=+6120.025930337" Jan 30 06:49:44 crc kubenswrapper[4931]: I0130 06:49:44.992963 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-45ct9"] Jan 30 06:49:44 crc kubenswrapper[4931]: I0130 06:49:44.995948 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.008486 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-45ct9"] Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.097224 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-65c4-account-create-update-rfndg"] Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.099068 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.101238 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.107330 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-65c4-account-create-update-rfndg"] Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.110625 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2w5m\" (UniqueName: \"kubernetes.io/projected/448719bb-ff8e-4d9e-982b-a8425f907a15-kube-api-access-l2w5m\") pod \"manila-db-create-45ct9\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.110754 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448719bb-ff8e-4d9e-982b-a8425f907a15-operator-scripts\") pod \"manila-db-create-45ct9\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.212721 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2w5m\" (UniqueName: \"kubernetes.io/projected/448719bb-ff8e-4d9e-982b-a8425f907a15-kube-api-access-l2w5m\") pod \"manila-db-create-45ct9\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.212817 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448719bb-ff8e-4d9e-982b-a8425f907a15-operator-scripts\") pod \"manila-db-create-45ct9\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc 
kubenswrapper[4931]: I0130 06:49:45.212929 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037161b5-dad9-4d8f-9be4-f980ee947129-operator-scripts\") pod \"manila-65c4-account-create-update-rfndg\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.212958 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gvwc\" (UniqueName: \"kubernetes.io/projected/037161b5-dad9-4d8f-9be4-f980ee947129-kube-api-access-6gvwc\") pod \"manila-65c4-account-create-update-rfndg\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.213810 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448719bb-ff8e-4d9e-982b-a8425f907a15-operator-scripts\") pod \"manila-db-create-45ct9\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.232014 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2w5m\" (UniqueName: \"kubernetes.io/projected/448719bb-ff8e-4d9e-982b-a8425f907a15-kube-api-access-l2w5m\") pod \"manila-db-create-45ct9\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.314374 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037161b5-dad9-4d8f-9be4-f980ee947129-operator-scripts\") pod \"manila-65c4-account-create-update-rfndg\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.314538 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gvwc\" (UniqueName: \"kubernetes.io/projected/037161b5-dad9-4d8f-9be4-f980ee947129-kube-api-access-6gvwc\") pod \"manila-65c4-account-create-update-rfndg\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.315574 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037161b5-dad9-4d8f-9be4-f980ee947129-operator-scripts\") pod \"manila-65c4-account-create-update-rfndg\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.334689 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-45ct9" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.353254 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gvwc\" (UniqueName: \"kubernetes.io/projected/037161b5-dad9-4d8f-9be4-f980ee947129-kube-api-access-6gvwc\") pod \"manila-65c4-account-create-update-rfndg\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.420387 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:45 crc kubenswrapper[4931]: I0130 06:49:45.814910 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-45ct9"] Jan 30 06:49:46 crc kubenswrapper[4931]: W0130 06:49:46.180398 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod037161b5_dad9_4d8f_9be4_f980ee947129.slice/crio-88f853b1d6e7fa7789617a86c9ca6cda2ea7a5e89d27946d5df194f470ea3b57 WatchSource:0}: Error finding container 88f853b1d6e7fa7789617a86c9ca6cda2ea7a5e89d27946d5df194f470ea3b57: Status 404 returned error can't find the container with id 88f853b1d6e7fa7789617a86c9ca6cda2ea7a5e89d27946d5df194f470ea3b57 Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.185647 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-65c4-account-create-update-rfndg"] Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.185789 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.675477 4931 generic.go:334] "Generic (PLEG): container finished" podID="037161b5-dad9-4d8f-9be4-f980ee947129" containerID="7c8becae24c7a8a33bf584e1ab34512a30cd0f1208b8f42cb257da9c6245e6c8" exitCode=0 Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.675688 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-65c4-account-create-update-rfndg" event={"ID":"037161b5-dad9-4d8f-9be4-f980ee947129","Type":"ContainerDied","Data":"7c8becae24c7a8a33bf584e1ab34512a30cd0f1208b8f42cb257da9c6245e6c8"} Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.675820 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-65c4-account-create-update-rfndg" event={"ID":"037161b5-dad9-4d8f-9be4-f980ee947129","Type":"ContainerStarted","Data":"88f853b1d6e7fa7789617a86c9ca6cda2ea7a5e89d27946d5df194f470ea3b57"} Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.678715 4931 generic.go:334] "Generic (PLEG): container finished" podID="448719bb-ff8e-4d9e-982b-a8425f907a15" containerID="030be6de81f263d984b02a8d10e7722844ea7978d675c59a14a66ccbbd2666b2" exitCode=0 Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.678750 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-45ct9" event={"ID":"448719bb-ff8e-4d9e-982b-a8425f907a15","Type":"ContainerDied","Data":"030be6de81f263d984b02a8d10e7722844ea7978d675c59a14a66ccbbd2666b2"} Jan 30 06:49:46 crc kubenswrapper[4931]: I0130 06:49:46.678769 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-45ct9" event={"ID":"448719bb-ff8e-4d9e-982b-a8425f907a15","Type":"ContainerStarted","Data":"e434123ad5b1fb377d0c8862f9eb8db04393395119cd0b0bbe793eaf79cbbfc2"} Jan 30 06:49:47 crc kubenswrapper[4931]: I0130 06:49:47.042986 4931 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f264-account-create-update-hm2jw"] Jan 30 06:49:47 crc kubenswrapper[4931]: I0130 06:49:47.052223 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7fmgw"] Jan 30 06:49:47 crc kubenswrapper[4931]: I0130 06:49:47.062666 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7fmgw"] Jan 30 06:49:47 crc kubenswrapper[4931]: I0130 06:49:47.070690 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f264-account-create-update-hm2jw"] Jan 30 06:49:47 crc kubenswrapper[4931]: I0130 06:49:47.438697 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44bda186-cc7a-4422-8266-5f494795cf7f" path="/var/lib/kubelet/pods/44bda186-cc7a-4422-8266-5f494795cf7f/volumes" Jan 30 06:49:47 crc kubenswrapper[4931]: I0130 06:49:47.439427 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7909a37-a194-4666-b642-8193c2b8e29c" path="/var/lib/kubelet/pods/a7909a37-a194-4666-b642-8193c2b8e29c/volumes" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.207421 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.215921 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-45ct9" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.283535 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gvwc\" (UniqueName: \"kubernetes.io/projected/037161b5-dad9-4d8f-9be4-f980ee947129-kube-api-access-6gvwc\") pod \"037161b5-dad9-4d8f-9be4-f980ee947129\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.283940 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037161b5-dad9-4d8f-9be4-f980ee947129-operator-scripts\") pod \"037161b5-dad9-4d8f-9be4-f980ee947129\" (UID: \"037161b5-dad9-4d8f-9be4-f980ee947129\") " Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.284781 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037161b5-dad9-4d8f-9be4-f980ee947129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "037161b5-dad9-4d8f-9be4-f980ee947129" (UID: "037161b5-dad9-4d8f-9be4-f980ee947129"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.292426 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037161b5-dad9-4d8f-9be4-f980ee947129-kube-api-access-6gvwc" (OuterVolumeSpecName: "kube-api-access-6gvwc") pod "037161b5-dad9-4d8f-9be4-f980ee947129" (UID: "037161b5-dad9-4d8f-9be4-f980ee947129"). InnerVolumeSpecName "kube-api-access-6gvwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.387080 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448719bb-ff8e-4d9e-982b-a8425f907a15-operator-scripts\") pod \"448719bb-ff8e-4d9e-982b-a8425f907a15\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.387894 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2w5m\" (UniqueName: \"kubernetes.io/projected/448719bb-ff8e-4d9e-982b-a8425f907a15-kube-api-access-l2w5m\") pod \"448719bb-ff8e-4d9e-982b-a8425f907a15\" (UID: \"448719bb-ff8e-4d9e-982b-a8425f907a15\") " Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.388849 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/037161b5-dad9-4d8f-9be4-f980ee947129-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.388868 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gvwc\" (UniqueName: \"kubernetes.io/projected/037161b5-dad9-4d8f-9be4-f980ee947129-kube-api-access-6gvwc\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.389300 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448719bb-ff8e-4d9e-982b-a8425f907a15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "448719bb-ff8e-4d9e-982b-a8425f907a15" (UID: "448719bb-ff8e-4d9e-982b-a8425f907a15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.403177 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448719bb-ff8e-4d9e-982b-a8425f907a15-kube-api-access-l2w5m" (OuterVolumeSpecName: "kube-api-access-l2w5m") pod "448719bb-ff8e-4d9e-982b-a8425f907a15" (UID: "448719bb-ff8e-4d9e-982b-a8425f907a15"). InnerVolumeSpecName "kube-api-access-l2w5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.491036 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2w5m\" (UniqueName: \"kubernetes.io/projected/448719bb-ff8e-4d9e-982b-a8425f907a15-kube-api-access-l2w5m\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.491075 4931 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/448719bb-ff8e-4d9e-982b-a8425f907a15-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.698399 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-65c4-account-create-update-rfndg" event={"ID":"037161b5-dad9-4d8f-9be4-f980ee947129","Type":"ContainerDied","Data":"88f853b1d6e7fa7789617a86c9ca6cda2ea7a5e89d27946d5df194f470ea3b57"} Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.698477 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f853b1d6e7fa7789617a86c9ca6cda2ea7a5e89d27946d5df194f470ea3b57" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.698610 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-65c4-account-create-update-rfndg" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.708231 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-45ct9" event={"ID":"448719bb-ff8e-4d9e-982b-a8425f907a15","Type":"ContainerDied","Data":"e434123ad5b1fb377d0c8862f9eb8db04393395119cd0b0bbe793eaf79cbbfc2"} Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.708272 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e434123ad5b1fb377d0c8862f9eb8db04393395119cd0b0bbe793eaf79cbbfc2" Jan 30 06:49:48 crc kubenswrapper[4931]: I0130 06:49:48.708343 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-45ct9" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.687132 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-ctjj7"] Jan 30 06:49:50 crc kubenswrapper[4931]: E0130 06:49:50.687942 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037161b5-dad9-4d8f-9be4-f980ee947129" containerName="mariadb-account-create-update" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.687960 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="037161b5-dad9-4d8f-9be4-f980ee947129" containerName="mariadb-account-create-update" Jan 30 06:49:50 crc kubenswrapper[4931]: E0130 06:49:50.687975 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448719bb-ff8e-4d9e-982b-a8425f907a15" containerName="mariadb-database-create" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.687982 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="448719bb-ff8e-4d9e-982b-a8425f907a15" containerName="mariadb-database-create" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.688240 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="037161b5-dad9-4d8f-9be4-f980ee947129" containerName="mariadb-account-create-update" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.688267 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="448719bb-ff8e-4d9e-982b-a8425f907a15" containerName="mariadb-database-create" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.689372 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.693772 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-h2dqk" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.706371 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ctjj7"] Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.711849 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.842144 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-config-data\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.842637 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-combined-ca-bundle\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.842887 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-job-config-data\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.843010 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gktd8\" (UniqueName: \"kubernetes.io/projected/2f518288-3c69-4f3a-9e32-9f9211cab22a-kube-api-access-gktd8\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.944918 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-config-data\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.945014 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-combined-ca-bundle\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.945061 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-job-config-data\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.945087 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gktd8\" (UniqueName: \"kubernetes.io/projected/2f518288-3c69-4f3a-9e32-9f9211cab22a-kube-api-access-gktd8\") pod \"manila-db-sync-ctjj7\" (UID: 
\"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.960198 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-combined-ca-bundle\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.960988 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-job-config-data\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.961045 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-config-data\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:50 crc kubenswrapper[4931]: I0130 06:49:50.973315 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gktd8\" (UniqueName: \"kubernetes.io/projected/2f518288-3c69-4f3a-9e32-9f9211cab22a-kube-api-access-gktd8\") pod \"manila-db-sync-ctjj7\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:51 crc kubenswrapper[4931]: I0130 06:49:51.010858 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ctjj7" Jan 30 06:49:52 crc kubenswrapper[4931]: W0130 06:49:52.106348 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f518288_3c69_4f3a_9e32_9f9211cab22a.slice/crio-166e1550f8f34b4565cb40cded79456bbdb37c4ffea08a4b6b17690ebabe67e1 WatchSource:0}: Error finding container 166e1550f8f34b4565cb40cded79456bbdb37c4ffea08a4b6b17690ebabe67e1: Status 404 returned error can't find the container with id 166e1550f8f34b4565cb40cded79456bbdb37c4ffea08a4b6b17690ebabe67e1 Jan 30 06:49:52 crc kubenswrapper[4931]: I0130 06:49:52.109179 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-ctjj7"] Jan 30 06:49:52 crc kubenswrapper[4931]: I0130 06:49:52.751351 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ctjj7" event={"ID":"2f518288-3c69-4f3a-9e32-9f9211cab22a","Type":"ContainerStarted","Data":"166e1550f8f34b4565cb40cded79456bbdb37c4ffea08a4b6b17690ebabe67e1"} Jan 30 06:49:55 crc kubenswrapper[4931]: I0130 06:49:55.058688 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tfpsx"] Jan 30 06:49:55 crc kubenswrapper[4931]: I0130 06:49:55.074398 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tfpsx"] Jan 30 06:49:55 crc kubenswrapper[4931]: I0130 06:49:55.436353 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0677a1-9051-4719-9e4e-142694e6683a" path="/var/lib/kubelet/pods/de0677a1-9051-4719-9e4e-142694e6683a/volumes" Jan 30 06:49:56 crc kubenswrapper[4931]: I0130 06:49:56.897332 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j6wq2"] Jan 30 06:49:56 crc kubenswrapper[4931]: I0130 06:49:56.935469 4931 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:56 crc kubenswrapper[4931]: I0130 06:49:56.980288 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6wq2"] Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.077123 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngkp9\" (UniqueName: \"kubernetes.io/projected/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-kube-api-access-ngkp9\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.077232 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-catalog-content\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.077502 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-utilities\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.179265 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngkp9\" (UniqueName: \"kubernetes.io/projected/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-kube-api-access-ngkp9\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.179470 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-catalog-content\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.179693 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-utilities\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.179907 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-catalog-content\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.180122 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-utilities\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.203708 4931 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngkp9\" (UniqueName: \"kubernetes.io/projected/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-kube-api-access-ngkp9\") pod \"community-operators-j6wq2\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.271620 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.363440 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:49:57 crc kubenswrapper[4931]: I0130 06:49:57.363499 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:50:01 crc kubenswrapper[4931]: I0130 06:50:01.752508 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6wq2"] Jan 30 06:50:01 crc kubenswrapper[4931]: W0130 06:50:01.758470 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b12adab_30ea_4122_9ea7_0c8b2fdb117c.slice/crio-a18b31c9c3fa680e8d93ca8737e2041057cbd5a166876584d210ca053654abaf WatchSource:0}: Error finding container a18b31c9c3fa680e8d93ca8737e2041057cbd5a166876584d210ca053654abaf: Status 404 returned error can't find the container with id a18b31c9c3fa680e8d93ca8737e2041057cbd5a166876584d210ca053654abaf Jan 30 06:50:01 crc kubenswrapper[4931]: I0130 06:50:01.837201 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerStarted","Data":"a18b31c9c3fa680e8d93ca8737e2041057cbd5a166876584d210ca053654abaf"} Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.337012 4931 scope.go:117] "RemoveContainer" containerID="956ce554bd663761599c9dc4f978e7719f40043720c3d10db30cc18c76ff6127" Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.412742 4931 scope.go:117] "RemoveContainer" containerID="cff7e0d64b5667667e85a8a7d8d6d557567a72e224933981bb30fb75cc9c37a5" Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.505038 4931 scope.go:117] "RemoveContainer" containerID="18638ce6c93ee0d191ba3ee6b587a88fb1ae5413bad9f52cba0bc5cd608d3a29" Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.571728 4931 scope.go:117] "RemoveContainer" containerID="3f61c0f02fd2c2ced024d1e703805185ca2f7ee2e42863e0e8a06a4f812766d2" Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.605772 4931 scope.go:117] "RemoveContainer" containerID="ad91a28e445938a8582000a48ddcd232576020a5c15a4a29af6e45aaf8531507" Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.849524 4931 generic.go:334] "Generic (PLEG): container finished" podID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerID="b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d" exitCode=0 Jan 30 06:50:02 crc kubenswrapper[4931]: I0130 06:50:02.849566 4931 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerDied","Data":"b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d"} Jan 30 06:50:03 crc kubenswrapper[4931]: I0130 06:50:03.862719 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ctjj7" event={"ID":"2f518288-3c69-4f3a-9e32-9f9211cab22a","Type":"ContainerStarted","Data":"e300d33068406baea942af2b5b021d10a35ce639099354dd534b82d9b9278f4c"} Jan 30 06:50:03 crc kubenswrapper[4931]: I0130 06:50:03.885101 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-ctjj7" podStartSLOduration=3.777201205 podStartE2EDuration="13.885075983s" podCreationTimestamp="2026-01-30 06:49:50 +0000 UTC" firstStartedPulling="2026-01-30 06:49:52.113507723 +0000 UTC m=+6127.483418000" lastFinishedPulling="2026-01-30 06:50:02.221382481 +0000 UTC m=+6137.591292778" observedRunningTime="2026-01-30 06:50:03.876922184 +0000 UTC m=+6139.246832441" watchObservedRunningTime="2026-01-30 06:50:03.885075983 +0000 UTC m=+6139.254986250" Jan 30 06:50:05 crc kubenswrapper[4931]: I0130 06:50:05.885346 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerStarted","Data":"681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4"} Jan 30 06:50:06 crc kubenswrapper[4931]: I0130 06:50:06.980900 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 06:50:07 crc kubenswrapper[4931]: I0130 06:50:07.908639 4931 generic.go:334] "Generic (PLEG): container finished" podID="2f518288-3c69-4f3a-9e32-9f9211cab22a" containerID="e300d33068406baea942af2b5b021d10a35ce639099354dd534b82d9b9278f4c" exitCode=0 Jan 30 06:50:07 crc kubenswrapper[4931]: I0130 06:50:07.908706 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ctjj7" event={"ID":"2f518288-3c69-4f3a-9e32-9f9211cab22a","Type":"ContainerDied","Data":"e300d33068406baea942af2b5b021d10a35ce639099354dd534b82d9b9278f4c"} Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.479116 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-ctjj7" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.595782 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-config-data\") pod \"2f518288-3c69-4f3a-9e32-9f9211cab22a\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.595853 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-combined-ca-bundle\") pod \"2f518288-3c69-4f3a-9e32-9f9211cab22a\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.595917 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gktd8\" (UniqueName: \"kubernetes.io/projected/2f518288-3c69-4f3a-9e32-9f9211cab22a-kube-api-access-gktd8\") pod \"2f518288-3c69-4f3a-9e32-9f9211cab22a\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.596068 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-job-config-data\") pod \"2f518288-3c69-4f3a-9e32-9f9211cab22a\" (UID: \"2f518288-3c69-4f3a-9e32-9f9211cab22a\") " Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.610631 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f518288-3c69-4f3a-9e32-9f9211cab22a-kube-api-access-gktd8" (OuterVolumeSpecName: "kube-api-access-gktd8") pod "2f518288-3c69-4f3a-9e32-9f9211cab22a" (UID: "2f518288-3c69-4f3a-9e32-9f9211cab22a"). InnerVolumeSpecName "kube-api-access-gktd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.613375 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-config-data" (OuterVolumeSpecName: "config-data") pod "2f518288-3c69-4f3a-9e32-9f9211cab22a" (UID: "2f518288-3c69-4f3a-9e32-9f9211cab22a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.618958 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "2f518288-3c69-4f3a-9e32-9f9211cab22a" (UID: "2f518288-3c69-4f3a-9e32-9f9211cab22a"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.635492 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f518288-3c69-4f3a-9e32-9f9211cab22a" (UID: "2f518288-3c69-4f3a-9e32-9f9211cab22a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.698137 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.698174 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gktd8\" (UniqueName: \"kubernetes.io/projected/2f518288-3c69-4f3a-9e32-9f9211cab22a-kube-api-access-gktd8\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.698184 4931 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.698195 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f518288-3c69-4f3a-9e32-9f9211cab22a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.926849 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-ctjj7" event={"ID":"2f518288-3c69-4f3a-9e32-9f9211cab22a","Type":"ContainerDied","Data":"166e1550f8f34b4565cb40cded79456bbdb37c4ffea08a4b6b17690ebabe67e1"} Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.927200 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="166e1550f8f34b4565cb40cded79456bbdb37c4ffea08a4b6b17690ebabe67e1" Jan 30 06:50:09 crc kubenswrapper[4931]: I0130 06:50:09.926885 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-ctjj7" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.587475 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 30 06:50:10 crc kubenswrapper[4931]: E0130 06:50:10.587911 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f518288-3c69-4f3a-9e32-9f9211cab22a" containerName="manila-db-sync" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.587923 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f518288-3c69-4f3a-9e32-9f9211cab22a" containerName="manila-db-sync" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.588137 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f518288-3c69-4f3a-9e32-9f9211cab22a" containerName="manila-db-sync" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.589242 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.598819 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59977785bf-q4vw9"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.599149 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.599295 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-h2dqk" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.599315 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.599489 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.600610 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.607359 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.609182 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.617479 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.617726 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.635369 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.688023 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59977785bf-q4vw9"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.720586 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-dns-svc\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.720940 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.720969 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721066 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-combined-ca-bundle\") pod 
\"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721157 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-config\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721204 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-config-data\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721241 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-scripts\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721266 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721289 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721310 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721340 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-scripts\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721368 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d28zr\" (UniqueName: \"kubernetes.io/projected/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-kube-api-access-d28zr\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721402 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-config-data\") pod \"manila-scheduler-0\" (UID: 
\"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721449 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-ovsdbserver-sb\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721480 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m6c7\" (UniqueName: \"kubernetes.io/projected/b05cd1de-6848-4de5-92f4-399913835db3-kube-api-access-6m6c7\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721522 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstx2\" (UniqueName: \"kubernetes.io/projected/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-kube-api-access-pstx2\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721600 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-ceph\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721643 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-ovsdbserver-nb\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.721666 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.765960 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.767906 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.772207 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.781659 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.822885 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pstx2\" (UniqueName: \"kubernetes.io/projected/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-kube-api-access-pstx2\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.822955 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-config-data-custom\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823013 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-etc-machine-id\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823033 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-ceph\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823065 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-ovsdbserver-nb\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823080 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823125 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-logs\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823143 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-dns-svc\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823165 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823182 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823208 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823244 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-config\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823273 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-config-data\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823296 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-scripts\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823313 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823330 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823346 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823362 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 
30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823379 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-scripts\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823396 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-scripts\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823416 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d28zr\" (UniqueName: \"kubernetes.io/projected/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-kube-api-access-d28zr\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823446 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qrzc\" (UniqueName: \"kubernetes.io/projected/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-kube-api-access-9qrzc\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823464 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-config-data\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823479 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-config-data\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823499 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-ovsdbserver-sb\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.823525 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m6c7\" (UniqueName: \"kubernetes.io/projected/b05cd1de-6848-4de5-92f4-399913835db3-kube-api-access-6m6c7\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.825120 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-ovsdbserver-nb\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.826673 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-ovsdbserver-sb\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.827219 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-config\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.827354 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05cd1de-6848-4de5-92f4-399913835db3-dns-svc\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.827487 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.827527 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.828153 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.831067 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-scripts\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.833145 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.834700 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-config-data\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.839660 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m6c7\" (UniqueName: \"kubernetes.io/projected/b05cd1de-6848-4de5-92f4-399913835db3-kube-api-access-6m6c7\") pod \"dnsmasq-dns-59977785bf-q4vw9\" (UID: \"b05cd1de-6848-4de5-92f4-399913835db3\") " pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc 
kubenswrapper[4931]: I0130 06:50:10.840052 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.844127 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-ceph\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.844653 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-config-data\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.845888 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-scripts\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.846360 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.847975 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.856607 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d28zr\" (UniqueName: \"kubernetes.io/projected/3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70-kube-api-access-d28zr\") pod \"manila-share-share1-0\" (UID: \"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70\") " pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.875818 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pstx2\" (UniqueName: \"kubernetes.io/projected/b9eb637a-1c6e-47f5-87ec-fa28c244db0b-kube-api-access-pstx2\") pod \"manila-scheduler-0\" (UID: \"b9eb637a-1c6e-47f5-87ec-fa28c244db0b\") " pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.923453 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936016 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-logs\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936173 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936205 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-scripts\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936246 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qrzc\" (UniqueName: \"kubernetes.io/projected/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-kube-api-access-9qrzc\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936269 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-config-data\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936317 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-config-data-custom\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936376 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-etc-machine-id\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936394 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-logs\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936583 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.936620 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-etc-machine-id\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.939575 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-scripts\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.939951 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.940268 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-config-data-custom\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.941827 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-config-data\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.945340 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 30 06:50:10 crc kubenswrapper[4931]: I0130 06:50:10.957394 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qrzc\" (UniqueName: \"kubernetes.io/projected/50c75c49-3fc8-4f3e-9af2-66535e3b49a9-kube-api-access-9qrzc\") pod \"manila-api-0\" (UID: \"50c75c49-3fc8-4f3e-9af2-66535e3b49a9\") " pod="openstack/manila-api-0" Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.092559 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.529287 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.591956 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59977785bf-q4vw9"] Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.781185 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.949718 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" event={"ID":"b05cd1de-6848-4de5-92f4-399913835db3","Type":"ContainerStarted","Data":"89d6e80860900a2a9ba9ceaeb6a787c29072ba5d410dfd6349fc20a33ed3e5a9"} Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.949768 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" event={"ID":"b05cd1de-6848-4de5-92f4-399913835db3","Type":"ContainerStarted","Data":"3cd0c7d307be633a0d3b62ce55290c53bc46cd5e7f5cfbff1dd13256a3f46de7"} Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.952161 4931 generic.go:334] "Generic (PLEG): container finished" podID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerID="681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4" exitCode=0 Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.952256 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerDied","Data":"681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4"} Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.953168 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"b9eb637a-1c6e-47f5-87ec-fa28c244db0b","Type":"ContainerStarted","Data":"c197a3157a7e967f47a4986f2f525f9a378e4435446719644353b51b8309043a"} Jan 30 06:50:11 crc kubenswrapper[4931]: I0130 06:50:11.954794 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70","Type":"ContainerStarted","Data":"3c61c439f05978a8c0bd0c74057137c1fb75b1562304656a69e25d66548a55b8"} Jan 30 06:50:12 crc kubenswrapper[4931]: I0130 06:50:12.847454 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 30 06:50:12 crc kubenswrapper[4931]: I0130 06:50:12.969305 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"50c75c49-3fc8-4f3e-9af2-66535e3b49a9","Type":"ContainerStarted","Data":"9a5e78170ba9979b37144bb6c69619a96fe71ec83f7f7ae1c8aa04dd333aa696"} Jan 30 06:50:12 crc kubenswrapper[4931]: I0130 06:50:12.970796 4931 generic.go:334] "Generic (PLEG): container finished" podID="b05cd1de-6848-4de5-92f4-399913835db3" containerID="89d6e80860900a2a9ba9ceaeb6a787c29072ba5d410dfd6349fc20a33ed3e5a9" exitCode=0 Jan 30 06:50:12 crc kubenswrapper[4931]: I0130 06:50:12.970842 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" event={"ID":"b05cd1de-6848-4de5-92f4-399913835db3","Type":"ContainerDied","Data":"89d6e80860900a2a9ba9ceaeb6a787c29072ba5d410dfd6349fc20a33ed3e5a9"} Jan 30 06:50:12 crc kubenswrapper[4931]: I0130 06:50:12.980232 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" 
event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerStarted","Data":"137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1"} Jan 30 06:50:13 crc kubenswrapper[4931]: I0130 06:50:13.056066 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j6wq2" podStartSLOduration=7.405172794 podStartE2EDuration="17.056044989s" podCreationTimestamp="2026-01-30 06:49:56 +0000 UTC" firstStartedPulling="2026-01-30 06:50:02.851409496 +0000 UTC m=+6138.221319763" lastFinishedPulling="2026-01-30 06:50:12.502281701 +0000 UTC m=+6147.872191958" observedRunningTime="2026-01-30 06:50:13.016666541 +0000 UTC m=+6148.386576808" watchObservedRunningTime="2026-01-30 06:50:13.056044989 +0000 UTC m=+6148.425955246" Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.002601 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"b9eb637a-1c6e-47f5-87ec-fa28c244db0b","Type":"ContainerStarted","Data":"fff3cd3da7a1e0565d6c3ab09c75b7c979ce4896ad15ee33669392f300594aab"} Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.003182 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"b9eb637a-1c6e-47f5-87ec-fa28c244db0b","Type":"ContainerStarted","Data":"bf40182a00506ac472df5b8e5486b83c229d757049edd269d9747aa93052e605"} Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.007078 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"50c75c49-3fc8-4f3e-9af2-66535e3b49a9","Type":"ContainerStarted","Data":"3a0fdc2feb00fcee5361f31e8874e980b3b4cc3670d04aa01f305230ca363abb"} Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.014016 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" event={"ID":"b05cd1de-6848-4de5-92f4-399913835db3","Type":"ContainerStarted","Data":"d0320521dc403d998ca155278a7f3957b38bcbd67acc9239576b5a083b9a2b3c"} Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.014241 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.046587 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.060225118 podStartE2EDuration="4.046565082s" podCreationTimestamp="2026-01-30 06:50:10 +0000 UTC" firstStartedPulling="2026-01-30 06:50:11.531383802 +0000 UTC m=+6146.901294059" lastFinishedPulling="2026-01-30 06:50:12.517723756 +0000 UTC m=+6147.887634023" observedRunningTime="2026-01-30 06:50:14.024019567 +0000 UTC m=+6149.393929824" watchObservedRunningTime="2026-01-30 06:50:14.046565082 +0000 UTC m=+6149.416475339" Jan 30 06:50:14 crc kubenswrapper[4931]: I0130 06:50:14.054283 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" podStartSLOduration=4.054267579 podStartE2EDuration="4.054267579s" podCreationTimestamp="2026-01-30 06:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:50:14.043075854 +0000 UTC m=+6149.412986111" watchObservedRunningTime="2026-01-30 06:50:14.054267579 +0000 UTC m=+6149.424177826" Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.053491 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"50c75c49-3fc8-4f3e-9af2-66535e3b49a9","Type":"ContainerStarted","Data":"23effb781db0029b2a0562728b1467627c21585f5fd6ba1d1c58f8dbbcc65656"} Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.054217 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.105874 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.105858121 podStartE2EDuration="5.105858121s" podCreationTimestamp="2026-01-30 06:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 06:50:15.09697105 +0000 UTC m=+6150.466881317" watchObservedRunningTime="2026-01-30 06:50:15.105858121 +0000 UTC m=+6150.475768378" Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.150173 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.150448 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-central-agent" containerID="cri-o://e6075bddf7d9e844cbcadfacf5366aedab5b6c0869d55610da59bca13b8f2e10" gracePeriod=30 Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.150562 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="proxy-httpd" containerID="cri-o://c0fc86cc25abacab572b554b477bbaff87a06a333e44043d43cc5be409f8a89e" gracePeriod=30 Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.150600 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="sg-core" containerID="cri-o://d6ffd18a5fcda569a257ea582064d96c557e419380bbc5b6e6ffba7dcc0e2a22" gracePeriod=30 Jan 30 06:50:15 crc kubenswrapper[4931]: I0130 06:50:15.150634 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-notification-agent" containerID="cri-o://01bff7fca0dad20c19afe6ca1af5b694418bb46bbfaf5130445564c32e038144" gracePeriod=30 Jan 30 06:50:16 crc kubenswrapper[4931]: I0130 06:50:16.065978 4931 generic.go:334] "Generic (PLEG): container finished" podID="83416e39-1feb-47a7-9e5d-748122bed281" containerID="c0fc86cc25abacab572b554b477bbaff87a06a333e44043d43cc5be409f8a89e" exitCode=0 Jan 30 06:50:16 crc kubenswrapper[4931]: I0130 06:50:16.066326 4931 generic.go:334] "Generic (PLEG): container finished" podID="83416e39-1feb-47a7-9e5d-748122bed281" containerID="d6ffd18a5fcda569a257ea582064d96c557e419380bbc5b6e6ffba7dcc0e2a22" exitCode=2 Jan 30 06:50:16 crc kubenswrapper[4931]: I0130 06:50:16.066337 4931 generic.go:334] "Generic (PLEG): container finished" podID="83416e39-1feb-47a7-9e5d-748122bed281" containerID="e6075bddf7d9e844cbcadfacf5366aedab5b6c0869d55610da59bca13b8f2e10" exitCode=0 Jan 30 06:50:16 crc kubenswrapper[4931]: I0130 06:50:16.066069 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerDied","Data":"c0fc86cc25abacab572b554b477bbaff87a06a333e44043d43cc5be409f8a89e"} Jan 30 06:50:16 crc kubenswrapper[4931]: I0130 06:50:16.066459 4931 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerDied","Data":"d6ffd18a5fcda569a257ea582064d96c557e419380bbc5b6e6ffba7dcc0e2a22"} Jan 30 06:50:16 crc kubenswrapper[4931]: I0130 06:50:16.066476 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerDied","Data":"e6075bddf7d9e844cbcadfacf5366aedab5b6c0869d55610da59bca13b8f2e10"} Jan 30 06:50:17 crc kubenswrapper[4931]: I0130 06:50:17.271912 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:50:17 crc kubenswrapper[4931]: I0130 06:50:17.272461 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:50:18 crc kubenswrapper[4931]: I0130 06:50:18.090948 4931 generic.go:334] "Generic (PLEG): container finished" podID="83416e39-1feb-47a7-9e5d-748122bed281" containerID="01bff7fca0dad20c19afe6ca1af5b694418bb46bbfaf5130445564c32e038144" exitCode=0 Jan 30 06:50:18 crc kubenswrapper[4931]: I0130 06:50:18.091179 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerDied","Data":"01bff7fca0dad20c19afe6ca1af5b694418bb46bbfaf5130445564c32e038144"} Jan 30 06:50:18 crc kubenswrapper[4931]: I0130 06:50:18.321137 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6wq2" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="registry-server" probeResult="failure" output=< Jan 30 06:50:18 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:50:18 crc kubenswrapper[4931]: > Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.191104 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.261469 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-run-httpd\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.261546 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-scripts\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.261711 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-sg-core-conf-yaml\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.261741 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-combined-ca-bundle\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.261821 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-log-httpd\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.262001 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqxwx\" (UniqueName: \"kubernetes.io/projected/83416e39-1feb-47a7-9e5d-748122bed281-kube-api-access-pqxwx\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.262188 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-config-data\") pod \"83416e39-1feb-47a7-9e5d-748122bed281\" (UID: \"83416e39-1feb-47a7-9e5d-748122bed281\") " Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.262352 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.262382 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.263448 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.263473 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83416e39-1feb-47a7-9e5d-748122bed281-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.266752 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83416e39-1feb-47a7-9e5d-748122bed281-kube-api-access-pqxwx" (OuterVolumeSpecName: "kube-api-access-pqxwx") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "kube-api-access-pqxwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.267235 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-scripts" (OuterVolumeSpecName: "scripts") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.297645 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.360530 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.365665 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.365691 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.365702 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.365710 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqxwx\" (UniqueName: \"kubernetes.io/projected/83416e39-1feb-47a7-9e5d-748122bed281-kube-api-access-pqxwx\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.397298 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-config-data" (OuterVolumeSpecName: "config-data") pod "83416e39-1feb-47a7-9e5d-748122bed281" (UID: "83416e39-1feb-47a7-9e5d-748122bed281"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:19 crc kubenswrapper[4931]: I0130 06:50:19.467858 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83416e39-1feb-47a7-9e5d-748122bed281-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.125024 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70","Type":"ContainerStarted","Data":"23733cadfa5f645805d1ead2ddee0cb62565699bfce6560a00cd522cf2fa23f2"} Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.125524 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70","Type":"ContainerStarted","Data":"184f4f07cc92de22f89c14c90a2d5c84bf1ec61f1a94772da5110b4c53f3590c"} Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.141643 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"83416e39-1feb-47a7-9e5d-748122bed281","Type":"ContainerDied","Data":"c52383b65f03eb04f4d7782d65ea62f04f107a518cccdbc33d3de6808caeee13"} Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.141697 4931 scope.go:117] "RemoveContainer" containerID="c0fc86cc25abacab572b554b477bbaff87a06a333e44043d43cc5be409f8a89e" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.141820 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.161859 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.019763289 podStartE2EDuration="10.161840762s" podCreationTimestamp="2026-01-30 06:50:10 +0000 UTC" firstStartedPulling="2026-01-30 06:50:11.786831483 +0000 UTC m=+6147.156741750" lastFinishedPulling="2026-01-30 06:50:18.928908966 +0000 UTC m=+6154.298819223" observedRunningTime="2026-01-30 06:50:20.15929098 +0000 UTC m=+6155.529201247" watchObservedRunningTime="2026-01-30 06:50:20.161840762 +0000 UTC m=+6155.531751029" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.174478 4931 scope.go:117] "RemoveContainer" containerID="d6ffd18a5fcda569a257ea582064d96c557e419380bbc5b6e6ffba7dcc0e2a22" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.193013 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.213568 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.216391 4931 scope.go:117] "RemoveContainer" containerID="01bff7fca0dad20c19afe6ca1af5b694418bb46bbfaf5130445564c32e038144" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.222223 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:20 crc kubenswrapper[4931]: E0130 06:50:20.222749 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-notification-agent" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.222772 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-notification-agent" Jan 30 06:50:20 crc kubenswrapper[4931]: E0130 06:50:20.222803 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-central-agent" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.222812 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-central-agent" Jan 30 06:50:20 crc kubenswrapper[4931]: E0130 06:50:20.222832 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="proxy-httpd" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.222839 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="proxy-httpd" Jan 30 06:50:20 crc kubenswrapper[4931]: E0130 06:50:20.222861 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="sg-core" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.222887 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="sg-core" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.223116 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="sg-core" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.223142 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-notification-agent" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 
06:50:20.223158 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="ceilometer-central-agent" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.223179 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="83416e39-1feb-47a7-9e5d-748122bed281" containerName="proxy-httpd" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.225956 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.230868 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.231128 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.232026 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.263969 4931 scope.go:117] "RemoveContainer" containerID="e6075bddf7d9e844cbcadfacf5366aedab5b6c0869d55610da59bca13b8f2e10" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.283593 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-config-data\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.283661 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-scripts\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.283822 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.283905 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjgq\" (UniqueName: \"kubernetes.io/projected/57fae8dd-3d28-4bc8-b8f2-667d583a4931-kube-api-access-qxjgq\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.284172 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.284464 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-log-httpd\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.284532 4931 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-run-httpd\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386377 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-scripts\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386454 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386483 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjgq\" (UniqueName: \"kubernetes.io/projected/57fae8dd-3d28-4bc8-b8f2-667d583a4931-kube-api-access-qxjgq\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386549 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386612 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-log-httpd\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386632 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-run-httpd\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.386684 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-config-data\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.387461 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-log-httpd\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.387686 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-run-httpd\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.395162 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.395509 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-config-data\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.398063 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-scripts\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.411291 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.422284 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjgq\" (UniqueName: \"kubernetes.io/projected/57fae8dd-3d28-4bc8-b8f2-667d583a4931-kube-api-access-qxjgq\") pod \"ceilometer-0\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.556728 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.907212 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.925082 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.938631 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59977785bf-q4vw9" Jan 30 06:50:20 crc kubenswrapper[4931]: I0130 06:50:20.946354 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 30 06:50:21 crc kubenswrapper[4931]: I0130 06:50:21.023270 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5879d4f7c5-x7dw2"] Jan 30 06:50:21 crc kubenswrapper[4931]: I0130 06:50:21.023703 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="dnsmasq-dns" containerID="cri-o://1b21e024974dbffc1b686bd5f52316fe76f211ede1a6fa05295886b31dbd35b5" gracePeriod=10 Jan 30 06:50:21 crc kubenswrapper[4931]: I0130 06:50:21.120656 4931 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.82:5353: connect: connection refused" Jan 30 06:50:21 crc kubenswrapper[4931]: I0130 06:50:21.159743 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerStarted","Data":"29cb71fe5f366a9385aa94ca5e212e47a7ecc8bdc0bac6a88eee5b6e47ca58de"} Jan 30 06:50:21 crc kubenswrapper[4931]: I0130 06:50:21.461967 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83416e39-1feb-47a7-9e5d-748122bed281" path="/var/lib/kubelet/pods/83416e39-1feb-47a7-9e5d-748122bed281/volumes" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.173307 4931 generic.go:334] "Generic (PLEG): container finished" podID="214b78b9-e769-4474-be87-e9b494c2fa69" containerID="1b21e024974dbffc1b686bd5f52316fe76f211ede1a6fa05295886b31dbd35b5" exitCode=0 Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.173354 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" event={"ID":"214b78b9-e769-4474-be87-e9b494c2fa69","Type":"ContainerDied","Data":"1b21e024974dbffc1b686bd5f52316fe76f211ede1a6fa05295886b31dbd35b5"} Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.806702 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.838978 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn9tk\" (UniqueName: \"kubernetes.io/projected/214b78b9-e769-4474-be87-e9b494c2fa69-kube-api-access-pn9tk\") pod \"214b78b9-e769-4474-be87-e9b494c2fa69\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.839047 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-dns-svc\") pod \"214b78b9-e769-4474-be87-e9b494c2fa69\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.839077 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-config\") pod \"214b78b9-e769-4474-be87-e9b494c2fa69\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.839172 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-nb\") pod \"214b78b9-e769-4474-be87-e9b494c2fa69\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.839240 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-sb\") pod \"214b78b9-e769-4474-be87-e9b494c2fa69\" (UID: \"214b78b9-e769-4474-be87-e9b494c2fa69\") " Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.844100 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214b78b9-e769-4474-be87-e9b494c2fa69-kube-api-access-pn9tk" (OuterVolumeSpecName: "kube-api-access-pn9tk") pod "214b78b9-e769-4474-be87-e9b494c2fa69" (UID: "214b78b9-e769-4474-be87-e9b494c2fa69"). InnerVolumeSpecName "kube-api-access-pn9tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.893534 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "214b78b9-e769-4474-be87-e9b494c2fa69" (UID: "214b78b9-e769-4474-be87-e9b494c2fa69"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.926325 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "214b78b9-e769-4474-be87-e9b494c2fa69" (UID: "214b78b9-e769-4474-be87-e9b494c2fa69"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.930026 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-config" (OuterVolumeSpecName: "config") pod "214b78b9-e769-4474-be87-e9b494c2fa69" (UID: "214b78b9-e769-4474-be87-e9b494c2fa69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.937014 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "214b78b9-e769-4474-be87-e9b494c2fa69" (UID: "214b78b9-e769-4474-be87-e9b494c2fa69"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.941855 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.941889 4931 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.941989 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn9tk\" (UniqueName: \"kubernetes.io/projected/214b78b9-e769-4474-be87-e9b494c2fa69-kube-api-access-pn9tk\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.942024 4931 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:22 crc kubenswrapper[4931]: I0130 06:50:22.942033 4931 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/214b78b9-e769-4474-be87-e9b494c2fa69-config\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.186782 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" event={"ID":"214b78b9-e769-4474-be87-e9b494c2fa69","Type":"ContainerDied","Data":"1cea5345d991653f1a6830de732c35c4b2f81ed6821f46e956ec8f3a43e28720"} Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.186888 4931 scope.go:117] "RemoveContainer" 
containerID="1b21e024974dbffc1b686bd5f52316fe76f211ede1a6fa05295886b31dbd35b5" Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.187009 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5879d4f7c5-x7dw2" Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.192704 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerStarted","Data":"197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914"} Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.220963 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5879d4f7c5-x7dw2"] Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.226636 4931 scope.go:117] "RemoveContainer" containerID="45cf3829eaba7efc9ffdbde5fa46c91facdbe555edf8963708f266596e0113d9" Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.228244 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5879d4f7c5-x7dw2"] Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.332996 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:23 crc kubenswrapper[4931]: I0130 06:50:23.436099 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" path="/var/lib/kubelet/pods/214b78b9-e769-4474-be87-e9b494c2fa69/volumes" Jan 30 06:50:24 crc kubenswrapper[4931]: I0130 06:50:24.207146 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerStarted","Data":"bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631"} Jan 30 06:50:24 crc kubenswrapper[4931]: I0130 06:50:24.207536 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerStarted","Data":"54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e"} Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.245180 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerStarted","Data":"9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25"} Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.245616 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-central-agent" containerID="cri-o://197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914" gracePeriod=30 Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.245666 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.245719 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="proxy-httpd" containerID="cri-o://9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25" gracePeriod=30 Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.245765 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="sg-core" containerID="cri-o://bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631" 
gracePeriod=30 Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.245806 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-notification-agent" containerID="cri-o://54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e" gracePeriod=30 Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.362805 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.362870 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.362921 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.363713 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb726f99bee299533a0b10daf42c4d3c80f89e1b2459842bb36e1df7a3f9faa7"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:50:27 crc kubenswrapper[4931]: I0130 06:50:27.363769 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://fb726f99bee299533a0b10daf42c4d3c80f89e1b2459842bb36e1df7a3f9faa7" gracePeriod=600 Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.264099 4931 generic.go:334] "Generic (PLEG): container finished" podID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerID="9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25" exitCode=0 Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.264542 4931 generic.go:334] "Generic (PLEG): container finished" podID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerID="bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631" exitCode=2 Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.264551 4931 generic.go:334] "Generic (PLEG): container finished" podID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerID="54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e" exitCode=0 Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.264165 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerDied","Data":"9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25"} Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.264620 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerDied","Data":"bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631"} Jan 30 06:50:28 crc 
kubenswrapper[4931]: I0130 06:50:28.264635 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerDied","Data":"54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e"} Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.268789 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="fb726f99bee299533a0b10daf42c4d3c80f89e1b2459842bb36e1df7a3f9faa7" exitCode=0 Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.268818 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"fb726f99bee299533a0b10daf42c4d3c80f89e1b2459842bb36e1df7a3f9faa7"} Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.268883 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d"} Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.268914 4931 scope.go:117] "RemoveContainer" containerID="981373dfbb9d9692d805ba5950bbc2a4aba00fefca1f782507816d2c2ba0f7f7" Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.310955 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.299934937 podStartE2EDuration="8.310929413s" podCreationTimestamp="2026-01-30 06:50:20 +0000 UTC" firstStartedPulling="2026-01-30 06:50:20.921874697 +0000 UTC m=+6156.291784954" lastFinishedPulling="2026-01-30 06:50:25.932869143 +0000 UTC m=+6161.302779430" observedRunningTime="2026-01-30 06:50:27.286950318 +0000 UTC m=+6162.656860585" watchObservedRunningTime="2026-01-30 06:50:28.310929413 +0000 UTC m=+6163.680839690" Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.363248 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6wq2" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="registry-server" probeResult="failure" output=< Jan 30 06:50:28 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:50:28 crc kubenswrapper[4931]: > Jan 30 06:50:28 crc kubenswrapper[4931]: I0130 06:50:28.927310 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.001766 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxjgq\" (UniqueName: \"kubernetes.io/projected/57fae8dd-3d28-4bc8-b8f2-667d583a4931-kube-api-access-qxjgq\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.001833 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-run-httpd\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.001899 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-combined-ca-bundle\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.001977 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-scripts\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.002017 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-sg-core-conf-yaml\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.002072 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-config-data\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.002217 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-log-httpd\") pod \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\" (UID: \"57fae8dd-3d28-4bc8-b8f2-667d583a4931\") " Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.002505 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.002801 4931 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.003744 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.011981 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-scripts" (OuterVolumeSpecName: "scripts") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.012064 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fae8dd-3d28-4bc8-b8f2-667d583a4931-kube-api-access-qxjgq" (OuterVolumeSpecName: "kube-api-access-qxjgq") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). InnerVolumeSpecName "kube-api-access-qxjgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.036822 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.104307 4931 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57fae8dd-3d28-4bc8-b8f2-667d583a4931-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.104343 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxjgq\" (UniqueName: \"kubernetes.io/projected/57fae8dd-3d28-4bc8-b8f2-667d583a4931-kube-api-access-qxjgq\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.104354 4931 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.104362 4931 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.105132 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.125602 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-config-data" (OuterVolumeSpecName: "config-data") pod "57fae8dd-3d28-4bc8-b8f2-667d583a4931" (UID: "57fae8dd-3d28-4bc8-b8f2-667d583a4931"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.206476 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.206704 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fae8dd-3d28-4bc8-b8f2-667d583a4931-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.285302 4931 generic.go:334] "Generic (PLEG): container finished" podID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerID="197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914" exitCode=0 Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.285374 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerDied","Data":"197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914"} Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.285401 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"57fae8dd-3d28-4bc8-b8f2-667d583a4931","Type":"ContainerDied","Data":"29cb71fe5f366a9385aa94ca5e212e47a7ecc8bdc0bac6a88eee5b6e47ca58de"} Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.285417 4931 scope.go:117] "RemoveContainer" containerID="9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.286222 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.313494 4931 scope.go:117] "RemoveContainer" containerID="bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.354715 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.381140 4931 scope.go:117] "RemoveContainer" containerID="54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.384582 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.395720 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.398720 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="sg-core" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.398753 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="sg-core" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.398777 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-central-agent" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.398784 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-central-agent" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.398791 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" 
containerName="proxy-httpd" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.398796 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="proxy-httpd" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.398809 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="dnsmasq-dns" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.398815 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="dnsmasq-dns" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.398827 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-notification-agent" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.398832 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-notification-agent" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.398849 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="init" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.398855 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="init" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.399169 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-central-agent" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.399181 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="214b78b9-e769-4474-be87-e9b494c2fa69" containerName="dnsmasq-dns" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.399196 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="ceilometer-notification-agent" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.399206 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="proxy-httpd" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.399226 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" containerName="sg-core" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.401359 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.404038 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.404441 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.404818 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.415796 4931 scope.go:117] "RemoveContainer" containerID="197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.441165 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fae8dd-3d28-4bc8-b8f2-667d583a4931" path="/var/lib/kubelet/pods/57fae8dd-3d28-4bc8-b8f2-667d583a4931/volumes" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.443239 4931 scope.go:117] "RemoveContainer" containerID="9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.443630 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25\": container with ID starting with 9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25 not found: ID does not exist" containerID="9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.443668 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25"} err="failed to get container status \"9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25\": rpc error: code = NotFound desc = could not find container \"9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25\": container with ID starting with 9b6b1a571845a37b2c3b7a0e309b0fde565dc4026786cde88a1e3aa91519fd25 not found: ID does not exist" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.443687 4931 scope.go:117] "RemoveContainer" containerID="bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.444053 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631\": container with ID starting with bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631 not found: ID does not exist" containerID="bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.444083 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631"} err="failed to get container status \"bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631\": rpc error: code = NotFound desc = could not find container \"bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631\": container with ID starting with bf82ab3602e67902b4e094a710a1cd379b3bda403032cb0e5c2107422f572631 not found: ID does not exist" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.444099 4931 scope.go:117] "RemoveContainer" 
containerID="54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.444321 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e\": container with ID starting with 54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e not found: ID does not exist" containerID="54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.444344 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e"} err="failed to get container status \"54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e\": rpc error: code = NotFound desc = could not find container \"54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e\": container with ID starting with 54dbc567a6f51424cbe90715a2274c2a39e9560d24fbfc82467bed1bf7038a3e not found: ID does not exist" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.444362 4931 scope.go:117] "RemoveContainer" containerID="197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914" Jan 30 06:50:29 crc kubenswrapper[4931]: E0130 06:50:29.444650 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914\": container with ID starting with 197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914 not found: ID does not exist" containerID="197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.444671 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914"} err="failed to get container status \"197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914\": rpc error: code = NotFound desc = could not find container \"197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914\": container with ID starting with 197b2e00bf1b62071bfd0ea1789b2cd2e3aa6238e686992fda263b2330b2a914 not found: ID does not exist" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.513658 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-run-httpd\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.513793 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tbbt\" (UniqueName: \"kubernetes.io/projected/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-kube-api-access-5tbbt\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.513841 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-config-data\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 
06:50:29.513885 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.513923 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.513949 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-scripts\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.514039 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-log-httpd\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616026 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tbbt\" (UniqueName: \"kubernetes.io/projected/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-kube-api-access-5tbbt\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616361 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-config-data\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616448 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616498 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616538 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-scripts\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616632 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-log-httpd\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " 
pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.616722 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-run-httpd\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.617703 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-run-httpd\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.618575 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-log-httpd\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.620746 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.620822 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-scripts\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.621518 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-config-data\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.621594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.636850 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tbbt\" (UniqueName: \"kubernetes.io/projected/d49c4b54-cfb7-4264-a6b8-9ee32cc53de7-kube-api-access-5tbbt\") pod \"ceilometer-0\" (UID: \"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7\") " pod="openstack/ceilometer-0" Jan 30 06:50:29 crc kubenswrapper[4931]: I0130 06:50:29.723740 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 06:50:30 crc kubenswrapper[4931]: I0130 06:50:30.216982 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 06:50:30 crc kubenswrapper[4931]: W0130 06:50:30.223598 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd49c4b54_cfb7_4264_a6b8_9ee32cc53de7.slice/crio-e70ec4a1e75b08ae74bef2c03c30f8fde6723ad39111490f1e1adc5dbac355ad WatchSource:0}: Error finding container e70ec4a1e75b08ae74bef2c03c30f8fde6723ad39111490f1e1adc5dbac355ad: Status 404 returned error can't find the container with id e70ec4a1e75b08ae74bef2c03c30f8fde6723ad39111490f1e1adc5dbac355ad Jan 30 06:50:30 crc kubenswrapper[4931]: I0130 06:50:30.302064 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7","Type":"ContainerStarted","Data":"e70ec4a1e75b08ae74bef2c03c30f8fde6723ad39111490f1e1adc5dbac355ad"} Jan 30 06:50:31 crc kubenswrapper[4931]: I0130 06:50:31.313051 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7","Type":"ContainerStarted","Data":"b349f3523aa5d13b1408a90273ff6dec9f52ed69ae81c38a2c9c2e27d19c77ab"} Jan 30 06:50:32 crc kubenswrapper[4931]: I0130 06:50:32.323532 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7","Type":"ContainerStarted","Data":"a341b96dfe0b6e82654dabec321e7d7a9a8387bbf76d026a7f791d913d883308"} Jan 30 06:50:32 crc kubenswrapper[4931]: I0130 06:50:32.487879 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 30 06:50:32 crc kubenswrapper[4931]: I0130 06:50:32.560090 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 30 06:50:32 crc kubenswrapper[4931]: I0130 06:50:32.565507 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 30 06:50:33 crc kubenswrapper[4931]: I0130 06:50:33.335352 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7","Type":"ContainerStarted","Data":"6b53435422486799234d64567e00201256b9fd0017f0d413b2e249c8fa6e81a6"} Jan 30 06:50:35 crc kubenswrapper[4931]: I0130 06:50:35.364071 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d49c4b54-cfb7-4264-a6b8-9ee32cc53de7","Type":"ContainerStarted","Data":"beea85f311109358d788e6b45d0d72e60fc1353e7e6eb9561f2ba3b164cf7187"} Jan 30 06:50:35 crc kubenswrapper[4931]: I0130 06:50:35.364995 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 06:50:35 crc kubenswrapper[4931]: I0130 06:50:35.390693 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.111099626 podStartE2EDuration="6.390658912s" podCreationTimestamp="2026-01-30 06:50:29 +0000 UTC" firstStartedPulling="2026-01-30 06:50:30.227129842 +0000 UTC m=+6165.597040099" lastFinishedPulling="2026-01-30 06:50:34.506689118 +0000 UTC m=+6169.876599385" observedRunningTime="2026-01-30 06:50:35.38739314 +0000 UTC m=+6170.757303497" watchObservedRunningTime="2026-01-30 06:50:35.390658912 +0000 UTC m=+6170.760569219" Jan 30 06:50:37 crc 
kubenswrapper[4931]: I0130 06:50:37.324085 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:50:37 crc kubenswrapper[4931]: I0130 06:50:37.374716 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:50:37 crc kubenswrapper[4931]: I0130 06:50:37.579414 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6wq2"] Jan 30 06:50:38 crc kubenswrapper[4931]: I0130 06:50:38.396795 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j6wq2" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="registry-server" containerID="cri-o://137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1" gracePeriod=2 Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.184691 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.243715 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-catalog-content\") pod \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.244113 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-utilities\") pod \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.244503 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngkp9\" (UniqueName: \"kubernetes.io/projected/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-kube-api-access-ngkp9\") pod \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\" (UID: \"2b12adab-30ea-4122-9ea7-0c8b2fdb117c\") " Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.246592 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-utilities" (OuterVolumeSpecName: "utilities") pod "2b12adab-30ea-4122-9ea7-0c8b2fdb117c" (UID: "2b12adab-30ea-4122-9ea7-0c8b2fdb117c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.262999 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-kube-api-access-ngkp9" (OuterVolumeSpecName: "kube-api-access-ngkp9") pod "2b12adab-30ea-4122-9ea7-0c8b2fdb117c" (UID: "2b12adab-30ea-4122-9ea7-0c8b2fdb117c"). InnerVolumeSpecName "kube-api-access-ngkp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.316941 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b12adab-30ea-4122-9ea7-0c8b2fdb117c" (UID: "2b12adab-30ea-4122-9ea7-0c8b2fdb117c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.347599 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngkp9\" (UniqueName: \"kubernetes.io/projected/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-kube-api-access-ngkp9\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.347643 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.347657 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b12adab-30ea-4122-9ea7-0c8b2fdb117c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.415585 4931 generic.go:334] "Generic (PLEG): container finished" podID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerID="137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1" exitCode=0 Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.415634 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerDied","Data":"137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1"} Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.415666 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6wq2" event={"ID":"2b12adab-30ea-4122-9ea7-0c8b2fdb117c","Type":"ContainerDied","Data":"a18b31c9c3fa680e8d93ca8737e2041057cbd5a166876584d210ca053654abaf"} Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.415685 4931 scope.go:117] "RemoveContainer" containerID="137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.416876 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6wq2" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.448106 4931 scope.go:117] "RemoveContainer" containerID="681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.480800 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6wq2"] Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.491027 4931 scope.go:117] "RemoveContainer" containerID="b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.493608 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j6wq2"] Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.539617 4931 scope.go:117] "RemoveContainer" containerID="137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1" Jan 30 06:50:39 crc kubenswrapper[4931]: E0130 06:50:39.540166 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1\": container with ID starting with 137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1 not found: ID does not exist" containerID="137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.540215 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1"} err="failed to get container status \"137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1\": rpc error: code = NotFound desc = could not find container \"137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1\": container with ID starting with 137405c9a2e57ffc07560da2bab8534c0b526dfafb00e20c14c914b0824bcca1 not found: ID does not exist" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.540239 4931 scope.go:117] "RemoveContainer" containerID="681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4" Jan 30 06:50:39 crc kubenswrapper[4931]: E0130 06:50:39.540648 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4\": container with ID starting with 681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4 not found: ID does not exist" containerID="681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.540675 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4"} err="failed to get container status \"681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4\": rpc error: code = NotFound desc = could not find container \"681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4\": container with ID starting with 681b6c299cbe135891d8c8dcaae11331648366ef679f31fa6b02fe2f6fc1b5d4 not found: ID does not exist" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.540690 4931 scope.go:117] "RemoveContainer" containerID="b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d" Jan 30 06:50:39 crc kubenswrapper[4931]: E0130 06:50:39.540975 4931 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d\": container with ID starting with b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d not found: ID does not exist" containerID="b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d" Jan 30 06:50:39 crc kubenswrapper[4931]: I0130 06:50:39.540998 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d"} err="failed to get container status \"b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d\": rpc error: code = NotFound desc = could not find container \"b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d\": container with ID starting with b96dd5bc39ca8e281e96819198e81f51b95d1548ac1ca72b9934c7b258b3531d not found: ID does not exist" Jan 30 06:50:41 crc kubenswrapper[4931]: I0130 06:50:41.435069 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" path="/var/lib/kubelet/pods/2b12adab-30ea-4122-9ea7-0c8b2fdb117c/volumes" Jan 30 06:50:59 crc kubenswrapper[4931]: I0130 06:50:59.738309 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.191581 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57fqf/must-gather-v9ths"] Jan 30 06:52:05 crc kubenswrapper[4931]: E0130 06:52:05.193578 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="extract-content" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.193655 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="extract-content" Jan 30 06:52:05 crc kubenswrapper[4931]: E0130 06:52:05.193728 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="registry-server" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.193784 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="registry-server" Jan 30 06:52:05 crc kubenswrapper[4931]: E0130 06:52:05.194047 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="extract-utilities" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.194101 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="extract-utilities" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.194375 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b12adab-30ea-4122-9ea7-0c8b2fdb117c" containerName="registry-server" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.195568 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.197062 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-57fqf"/"openshift-service-ca.crt" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.197637 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-57fqf"/"default-dockercfg-zrkvq" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.199890 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-57fqf"/"kube-root-ca.crt" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.210981 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-57fqf/must-gather-v9ths"] Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.287025 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1075a448-992c-4364-842b-06dda255cd42-must-gather-output\") pod \"must-gather-v9ths\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.287362 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcnzx\" (UniqueName: \"kubernetes.io/projected/1075a448-992c-4364-842b-06dda255cd42-kube-api-access-wcnzx\") pod \"must-gather-v9ths\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.389916 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcnzx\" (UniqueName: \"kubernetes.io/projected/1075a448-992c-4364-842b-06dda255cd42-kube-api-access-wcnzx\") pod \"must-gather-v9ths\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.390086 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1075a448-992c-4364-842b-06dda255cd42-must-gather-output\") pod \"must-gather-v9ths\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.390553 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1075a448-992c-4364-842b-06dda255cd42-must-gather-output\") pod \"must-gather-v9ths\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.413750 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcnzx\" (UniqueName: \"kubernetes.io/projected/1075a448-992c-4364-842b-06dda255cd42-kube-api-access-wcnzx\") pod \"must-gather-v9ths\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:05 crc kubenswrapper[4931]: I0130 06:52:05.511627 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:52:06 crc kubenswrapper[4931]: I0130 06:52:06.072384 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-57fqf/must-gather-v9ths"] Jan 30 06:52:06 crc kubenswrapper[4931]: I0130 06:52:06.549252 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/must-gather-v9ths" event={"ID":"1075a448-992c-4364-842b-06dda255cd42","Type":"ContainerStarted","Data":"cf436afdc02c26e863dc8ad812f5a2c820057a0e4bd2d7c8832d02ba1d788c94"} Jan 30 06:52:14 crc kubenswrapper[4931]: I0130 06:52:14.648837 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/must-gather-v9ths" event={"ID":"1075a448-992c-4364-842b-06dda255cd42","Type":"ContainerStarted","Data":"f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92"} Jan 30 06:52:15 crc kubenswrapper[4931]: I0130 06:52:15.662947 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/must-gather-v9ths" event={"ID":"1075a448-992c-4364-842b-06dda255cd42","Type":"ContainerStarted","Data":"b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942"} Jan 30 06:52:15 crc kubenswrapper[4931]: I0130 06:52:15.688985 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-57fqf/must-gather-v9ths" podStartSLOduration=2.670147832 podStartE2EDuration="10.688950865s" podCreationTimestamp="2026-01-30 06:52:05 +0000 UTC" firstStartedPulling="2026-01-30 06:52:06.072704525 +0000 UTC m=+6261.442614782" lastFinishedPulling="2026-01-30 06:52:14.091507558 +0000 UTC m=+6269.461417815" observedRunningTime="2026-01-30 06:52:15.683600354 +0000 UTC m=+6271.053510671" watchObservedRunningTime="2026-01-30 06:52:15.688950865 +0000 UTC m=+6271.058861162" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.566474 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57fqf/crc-debug-k2m6k"] Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.568281 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.689487 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv6z7\" (UniqueName: \"kubernetes.io/projected/57350cb0-488c-4df8-808a-a9327d16816d-kube-api-access-dv6z7\") pod \"crc-debug-k2m6k\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.690031 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57350cb0-488c-4df8-808a-a9327d16816d-host\") pod \"crc-debug-k2m6k\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.791404 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57350cb0-488c-4df8-808a-a9327d16816d-host\") pod \"crc-debug-k2m6k\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.791476 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57350cb0-488c-4df8-808a-a9327d16816d-host\") pod \"crc-debug-k2m6k\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.791591 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv6z7\" (UniqueName: \"kubernetes.io/projected/57350cb0-488c-4df8-808a-a9327d16816d-kube-api-access-dv6z7\") pod \"crc-debug-k2m6k\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.813594 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv6z7\" (UniqueName: \"kubernetes.io/projected/57350cb0-488c-4df8-808a-a9327d16816d-kube-api-access-dv6z7\") pod \"crc-debug-k2m6k\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: I0130 06:52:18.884058 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:18 crc kubenswrapper[4931]: W0130 06:52:18.928929 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57350cb0_488c_4df8_808a_a9327d16816d.slice/crio-99e60a1f5ea9c470502256a828f0ebd37f1595ad3a4fa47fcd3728c61b6ab807 WatchSource:0}: Error finding container 99e60a1f5ea9c470502256a828f0ebd37f1595ad3a4fa47fcd3728c61b6ab807: Status 404 returned error can't find the container with id 99e60a1f5ea9c470502256a828f0ebd37f1595ad3a4fa47fcd3728c61b6ab807 Jan 30 06:52:19 crc kubenswrapper[4931]: I0130 06:52:19.761321 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" event={"ID":"57350cb0-488c-4df8-808a-a9327d16816d","Type":"ContainerStarted","Data":"99e60a1f5ea9c470502256a828f0ebd37f1595ad3a4fa47fcd3728c61b6ab807"} Jan 30 06:52:27 crc kubenswrapper[4931]: I0130 06:52:27.363035 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:52:27 crc kubenswrapper[4931]: I0130 06:52:27.363685 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:52:32 crc kubenswrapper[4931]: I0130 06:52:32.895449 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" event={"ID":"57350cb0-488c-4df8-808a-a9327d16816d","Type":"ContainerStarted","Data":"a1eade7d298ab8964a28bb2ee51b88b39f3ffa30229e7d9814ffc7f1e58b96ec"} Jan 30 06:52:32 crc kubenswrapper[4931]: I0130 06:52:32.909180 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" podStartSLOduration=1.8654467609999998 podStartE2EDuration="14.909159561s" podCreationTimestamp="2026-01-30 06:52:18 +0000 UTC" firstStartedPulling="2026-01-30 06:52:18.930858942 +0000 UTC m=+6274.300769209" lastFinishedPulling="2026-01-30 06:52:31.974571732 +0000 UTC m=+6287.344482009" observedRunningTime="2026-01-30 06:52:32.906757773 +0000 UTC m=+6288.276668030" watchObservedRunningTime="2026-01-30 06:52:32.909159561 +0000 UTC m=+6288.279069838" Jan 30 06:52:39 crc kubenswrapper[4931]: I0130 06:52:39.052619 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-5v2g5"] Jan 30 06:52:39 crc kubenswrapper[4931]: I0130 06:52:39.063127 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-5v2g5"] Jan 30 06:52:39 crc kubenswrapper[4931]: I0130 06:52:39.437006 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031" path="/var/lib/kubelet/pods/cfb96a2d-1dc4-45d0-9dd2-b2a0bda9a031/volumes" Jan 30 06:52:40 crc kubenswrapper[4931]: I0130 06:52:40.270832 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db99-account-create-update-8dpb5"] Jan 30 06:52:40 crc kubenswrapper[4931]: I0130 06:52:40.284737 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/octavia-db99-account-create-update-8dpb5"] Jan 30 06:52:41 crc kubenswrapper[4931]: I0130 06:52:41.630764 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21268b76-c5b2-457f-a433-ff2da3b9bd10" path="/var/lib/kubelet/pods/21268b76-c5b2-457f-a433-ff2da3b9bd10/volumes" Jan 30 06:52:46 crc kubenswrapper[4931]: I0130 06:52:46.046933 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-scftb"] Jan 30 06:52:46 crc kubenswrapper[4931]: I0130 06:52:46.066269 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-scftb"] Jan 30 06:52:47 crc kubenswrapper[4931]: I0130 06:52:47.036126 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-06bb-account-create-update-9w2dc"] Jan 30 06:52:47 crc kubenswrapper[4931]: I0130 06:52:47.051940 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-06bb-account-create-update-9w2dc"] Jan 30 06:52:47 crc kubenswrapper[4931]: I0130 06:52:47.513954 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c32b952-ea20-4e38-be3d-0ca833fb8aaf" path="/var/lib/kubelet/pods/7c32b952-ea20-4e38-be3d-0ca833fb8aaf/volumes" Jan 30 06:52:47 crc kubenswrapper[4931]: I0130 06:52:47.515769 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffbc623-924e-4952-890f-da78398d60fb" path="/var/lib/kubelet/pods/cffbc623-924e-4952-890f-da78398d60fb/volumes" Jan 30 06:52:57 crc kubenswrapper[4931]: I0130 06:52:57.156348 4931 generic.go:334] "Generic (PLEG): container finished" podID="57350cb0-488c-4df8-808a-a9327d16816d" containerID="a1eade7d298ab8964a28bb2ee51b88b39f3ffa30229e7d9814ffc7f1e58b96ec" exitCode=0 Jan 30 06:52:57 crc kubenswrapper[4931]: I0130 06:52:57.156759 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" event={"ID":"57350cb0-488c-4df8-808a-a9327d16816d","Type":"ContainerDied","Data":"a1eade7d298ab8964a28bb2ee51b88b39f3ffa30229e7d9814ffc7f1e58b96ec"} Jan 30 06:52:57 crc kubenswrapper[4931]: I0130 06:52:57.362537 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:52:57 crc kubenswrapper[4931]: I0130 06:52:57.362595 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.321713 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.365654 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57fqf/crc-debug-k2m6k"] Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.393204 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57fqf/crc-debug-k2m6k"] Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.428938 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57350cb0-488c-4df8-808a-a9327d16816d-host\") pod \"57350cb0-488c-4df8-808a-a9327d16816d\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.429104 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57350cb0-488c-4df8-808a-a9327d16816d-host" (OuterVolumeSpecName: "host") pod "57350cb0-488c-4df8-808a-a9327d16816d" (UID: "57350cb0-488c-4df8-808a-a9327d16816d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.429120 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv6z7\" (UniqueName: \"kubernetes.io/projected/57350cb0-488c-4df8-808a-a9327d16816d-kube-api-access-dv6z7\") pod \"57350cb0-488c-4df8-808a-a9327d16816d\" (UID: \"57350cb0-488c-4df8-808a-a9327d16816d\") " Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.432698 4931 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57350cb0-488c-4df8-808a-a9327d16816d-host\") on node \"crc\" DevicePath \"\"" Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.447688 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57350cb0-488c-4df8-808a-a9327d16816d-kube-api-access-dv6z7" (OuterVolumeSpecName: "kube-api-access-dv6z7") pod "57350cb0-488c-4df8-808a-a9327d16816d" (UID: "57350cb0-488c-4df8-808a-a9327d16816d"). InnerVolumeSpecName "kube-api-access-dv6z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:52:58 crc kubenswrapper[4931]: I0130 06:52:58.534932 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv6z7\" (UniqueName: \"kubernetes.io/projected/57350cb0-488c-4df8-808a-a9327d16816d-kube-api-access-dv6z7\") on node \"crc\" DevicePath \"\"" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.179866 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e60a1f5ea9c470502256a828f0ebd37f1595ad3a4fa47fcd3728c61b6ab807" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.179918 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-k2m6k" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.436099 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57350cb0-488c-4df8-808a-a9327d16816d" path="/var/lib/kubelet/pods/57350cb0-488c-4df8-808a-a9327d16816d/volumes" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.620170 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-57fqf/crc-debug-fgbqg"] Jan 30 06:52:59 crc kubenswrapper[4931]: E0130 06:52:59.621980 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57350cb0-488c-4df8-808a-a9327d16816d" containerName="container-00" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.622049 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="57350cb0-488c-4df8-808a-a9327d16816d" containerName="container-00" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.622303 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="57350cb0-488c-4df8-808a-a9327d16816d" containerName="container-00" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.623094 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.655262 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-host\") pod \"crc-debug-fgbqg\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.655313 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzgg\" (UniqueName: \"kubernetes.io/projected/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-kube-api-access-5nzgg\") pod \"crc-debug-fgbqg\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.757493 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-host\") pod \"crc-debug-fgbqg\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.757538 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nzgg\" (UniqueName: \"kubernetes.io/projected/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-kube-api-access-5nzgg\") pod \"crc-debug-fgbqg\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.757679 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-host\") pod \"crc-debug-fgbqg\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.774107 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nzgg\" (UniqueName: \"kubernetes.io/projected/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-kube-api-access-5nzgg\") pod \"crc-debug-fgbqg\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " 
pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:52:59 crc kubenswrapper[4931]: I0130 06:52:59.940957 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:53:00 crc kubenswrapper[4931]: I0130 06:53:00.188878 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/crc-debug-fgbqg" event={"ID":"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d","Type":"ContainerStarted","Data":"433b0aa6d9f40e4077322d1e7e4879e667697fc4e1ed460f9229365cd34aa89e"} Jan 30 06:53:01 crc kubenswrapper[4931]: I0130 06:53:01.198791 4931 generic.go:334] "Generic (PLEG): container finished" podID="4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" containerID="e650a4dea4b019515b8b62daf128b41c4a1b5ee09e0661b5a7e0c21439b0ecca" exitCode=1 Jan 30 06:53:01 crc kubenswrapper[4931]: I0130 06:53:01.198857 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/crc-debug-fgbqg" event={"ID":"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d","Type":"ContainerDied","Data":"e650a4dea4b019515b8b62daf128b41c4a1b5ee09e0661b5a7e0c21439b0ecca"} Jan 30 06:53:01 crc kubenswrapper[4931]: I0130 06:53:01.238962 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57fqf/crc-debug-fgbqg"] Jan 30 06:53:01 crc kubenswrapper[4931]: I0130 06:53:01.249959 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57fqf/crc-debug-fgbqg"] Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.333769 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.409523 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-host\") pod \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.409774 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nzgg\" (UniqueName: \"kubernetes.io/projected/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-kube-api-access-5nzgg\") pod \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\" (UID: \"4472ceea-bcfb-49b8-8c2c-60d6dbeae07d\") " Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.409885 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-host" (OuterVolumeSpecName: "host") pod "4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" (UID: "4472ceea-bcfb-49b8-8c2c-60d6dbeae07d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.410368 4931 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-host\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.418609 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-kube-api-access-5nzgg" (OuterVolumeSpecName: "kube-api-access-5nzgg") pod "4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" (UID: "4472ceea-bcfb-49b8-8c2c-60d6dbeae07d"). InnerVolumeSpecName "kube-api-access-5nzgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.515231 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nzgg\" (UniqueName: \"kubernetes.io/projected/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d-kube-api-access-5nzgg\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.958681 4931 scope.go:117] "RemoveContainer" containerID="e4d54f2baf7c89b6f0656a4f82e037b9be01132659b5c11c20729732cbeb53c0" Jan 30 06:53:02 crc kubenswrapper[4931]: I0130 06:53:02.991027 4931 scope.go:117] "RemoveContainer" containerID="fc2609bf05101f454b500a411b2af7bec596ed7b2a503264f64b371462ed10d1" Jan 30 06:53:03 crc kubenswrapper[4931]: I0130 06:53:03.047677 4931 scope.go:117] "RemoveContainer" containerID="530188cbe277ff66c92e88116244fb7e483ee43f7966980292a116725f942bcd" Jan 30 06:53:03 crc kubenswrapper[4931]: I0130 06:53:03.095177 4931 scope.go:117] "RemoveContainer" containerID="e1e6f3089c65c6555f71402d2087406c6daee75232130f6c5b8762d180358f01" Jan 30 06:53:03 crc kubenswrapper[4931]: I0130 06:53:03.222291 4931 scope.go:117] "RemoveContainer" containerID="e650a4dea4b019515b8b62daf128b41c4a1b5ee09e0661b5a7e0c21439b0ecca" Jan 30 06:53:03 crc kubenswrapper[4931]: I0130 06:53:03.222432 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57fqf/crc-debug-fgbqg" Jan 30 06:53:03 crc kubenswrapper[4931]: I0130 06:53:03.434562 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" path="/var/lib/kubelet/pods/4472ceea-bcfb-49b8-8c2c-60d6dbeae07d/volumes" Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.066208 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-2clsb"] Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.077847 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-2clsb"] Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.363179 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.363237 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.363278 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.364190 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.364249 4931 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" gracePeriod=600 Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.453630 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb" path="/var/lib/kubelet/pods/6c7503f1-c8e7-4b48-9dad-a4a221ebdbbb/volumes" Jan 30 06:53:27 crc kubenswrapper[4931]: E0130 06:53:27.484325 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.498580 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" exitCode=0 Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.498627 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d"} Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.498661 4931 scope.go:117] "RemoveContainer" containerID="fb726f99bee299533a0b10daf42c4d3c80f89e1b2459842bb36e1df7a3f9faa7" Jan 30 06:53:27 crc kubenswrapper[4931]: I0130 06:53:27.499365 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:53:27 crc kubenswrapper[4931]: E0130 06:53:27.499627 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.040155 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2d95k"] Jan 30 06:53:35 crc kubenswrapper[4931]: E0130 06:53:35.041768 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" containerName="container-00" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.041794 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" containerName="container-00" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.042206 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="4472ceea-bcfb-49b8-8c2c-60d6dbeae07d" containerName="container-00" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.045118 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.086549 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2d95k"] Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.167960 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-catalog-content\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.168146 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jlm5\" (UniqueName: \"kubernetes.io/projected/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-kube-api-access-8jlm5\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.168709 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-utilities\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.270213 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-catalog-content\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.270306 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jlm5\" (UniqueName: \"kubernetes.io/projected/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-kube-api-access-8jlm5\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.270391 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-utilities\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.270845 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-catalog-content\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.270876 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-utilities\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.297770 4931 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8jlm5\" (UniqueName: \"kubernetes.io/projected/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-kube-api-access-8jlm5\") pod \"redhat-marketplace-2d95k\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.384949 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:35 crc kubenswrapper[4931]: I0130 06:53:35.904375 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2d95k"] Jan 30 06:53:36 crc kubenswrapper[4931]: I0130 06:53:36.610634 4931 generic.go:334] "Generic (PLEG): container finished" podID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerID="968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218" exitCode=0 Jan 30 06:53:36 crc kubenswrapper[4931]: I0130 06:53:36.611044 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerDied","Data":"968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218"} Jan 30 06:53:36 crc kubenswrapper[4931]: I0130 06:53:36.611126 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerStarted","Data":"833fbe327aa6d698810c9ced1db040fd1f7ae267535aeb95838f24816cb7de3f"} Jan 30 06:53:37 crc kubenswrapper[4931]: I0130 06:53:37.628106 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerStarted","Data":"0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe"} Jan 30 06:53:38 crc kubenswrapper[4931]: I0130 06:53:38.644400 4931 generic.go:334] "Generic (PLEG): container finished" podID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerID="0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe" exitCode=0 Jan 30 06:53:38 crc kubenswrapper[4931]: I0130 06:53:38.644512 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerDied","Data":"0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe"} Jan 30 06:53:39 crc kubenswrapper[4931]: I0130 06:53:39.425240 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:53:39 crc kubenswrapper[4931]: E0130 06:53:39.426322 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:53:39 crc kubenswrapper[4931]: I0130 06:53:39.660703 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerStarted","Data":"501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6"} Jan 30 06:53:39 crc kubenswrapper[4931]: I0130 06:53:39.699623 4931 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-2d95k" podStartSLOduration=2.22668543 podStartE2EDuration="4.699595311s" podCreationTimestamp="2026-01-30 06:53:35 +0000 UTC" firstStartedPulling="2026-01-30 06:53:36.617215743 +0000 UTC m=+6351.987126040" lastFinishedPulling="2026-01-30 06:53:39.090125634 +0000 UTC m=+6354.460035921" observedRunningTime="2026-01-30 06:53:39.685111183 +0000 UTC m=+6355.055021480" watchObservedRunningTime="2026-01-30 06:53:39.699595311 +0000 UTC m=+6355.069505608" Jan 30 06:53:45 crc kubenswrapper[4931]: I0130 06:53:45.385495 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:45 crc kubenswrapper[4931]: I0130 06:53:45.386248 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:45 crc kubenswrapper[4931]: I0130 06:53:45.473356 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:45 crc kubenswrapper[4931]: I0130 06:53:45.813319 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:45 crc kubenswrapper[4931]: I0130 06:53:45.887773 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2d95k"] Jan 30 06:53:47 crc kubenswrapper[4931]: I0130 06:53:47.757131 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2d95k" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="registry-server" containerID="cri-o://501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6" gracePeriod=2 Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.291713 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.427691 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-utilities\") pod \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.427760 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jlm5\" (UniqueName: \"kubernetes.io/projected/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-kube-api-access-8jlm5\") pod \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.427914 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-catalog-content\") pod \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\" (UID: \"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf\") " Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.429033 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-utilities" (OuterVolumeSpecName: "utilities") pod "92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" (UID: "92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.436725 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-kube-api-access-8jlm5" (OuterVolumeSpecName: "kube-api-access-8jlm5") pod "92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" (UID: "92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf"). InnerVolumeSpecName "kube-api-access-8jlm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.458822 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" (UID: "92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.531303 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.531339 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jlm5\" (UniqueName: \"kubernetes.io/projected/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-kube-api-access-8jlm5\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.531354 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.800059 4931 generic.go:334] "Generic (PLEG): container finished" podID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerID="501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6" exitCode=0 Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.800104 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerDied","Data":"501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6"} Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.800134 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2d95k" event={"ID":"92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf","Type":"ContainerDied","Data":"833fbe327aa6d698810c9ced1db040fd1f7ae267535aeb95838f24816cb7de3f"} Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.800155 4931 scope.go:117] "RemoveContainer" containerID="501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.800291 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2d95k" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.839739 4931 scope.go:117] "RemoveContainer" containerID="0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.854512 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2d95k"] Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.869978 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2d95k"] Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.871963 4931 scope.go:117] "RemoveContainer" containerID="968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.921498 4931 scope.go:117] "RemoveContainer" containerID="501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6" Jan 30 06:53:48 crc kubenswrapper[4931]: E0130 06:53:48.921943 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6\": container with ID starting with 501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6 not found: ID does not exist" containerID="501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.921976 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6"} err="failed to get container status \"501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6\": rpc error: code = NotFound desc = could not find container \"501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6\": container with ID starting with 501a6df030fbae239f5d8272e091944ecadab7061abe37c89dc9bf780fd65ec6 not found: ID does not exist" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.921995 4931 scope.go:117] "RemoveContainer" containerID="0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe" Jan 30 06:53:48 crc kubenswrapper[4931]: E0130 06:53:48.922378 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe\": container with ID starting with 0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe not found: ID does not exist" containerID="0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.922399 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe"} err="failed to get container status \"0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe\": rpc error: code = NotFound desc = could not find container \"0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe\": container with ID starting with 0c0a77b03ffc1d974aec5d9744828fca06cfa63316b5bbba0f19f6465b5febbe not found: ID does not exist" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.922411 4931 scope.go:117] "RemoveContainer" containerID="968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218" Jan 30 06:53:48 crc kubenswrapper[4931]: E0130 06:53:48.922801 4931 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218\": container with ID starting with 968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218 not found: ID does not exist" containerID="968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218" Jan 30 06:53:48 crc kubenswrapper[4931]: I0130 06:53:48.922833 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218"} err="failed to get container status \"968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218\": rpc error: code = NotFound desc = could not find container \"968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218\": container with ID starting with 968c183b425fecf3811291bc203004e271bc8bbe60568c2d9bc172c3e4840218 not found: ID does not exist" Jan 30 06:53:49 crc kubenswrapper[4931]: E0130 06:53:49.011335 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b1c327_8fc5_4f2c_adf4_7c61aa9ff6cf.slice/crio-833fbe327aa6d698810c9ced1db040fd1f7ae267535aeb95838f24816cb7de3f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92b1c327_8fc5_4f2c_adf4_7c61aa9ff6cf.slice\": RecentStats: unable to find data in memory cache]" Jan 30 06:53:49 crc kubenswrapper[4931]: I0130 06:53:49.452864 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" path="/var/lib/kubelet/pods/92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf/volumes" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.347643 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faef005b-c58c-4b22-944c-defd3471fa32/init-config-reloader/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.540928 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faef005b-c58c-4b22-944c-defd3471fa32/alertmanager/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.548208 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faef005b-c58c-4b22-944c-defd3471fa32/init-config-reloader/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.575474 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_faef005b-c58c-4b22-944c-defd3471fa32/config-reloader/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.700242 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_890734fc-018f-4d2e-bc3e-ef4399f477da/aodh-api/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.823775 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_890734fc-018f-4d2e-bc3e-ef4399f477da/aodh-evaluator/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.829930 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_890734fc-018f-4d2e-bc3e-ef4399f477da/aodh-listener/0.log" Jan 30 06:53:50 crc kubenswrapper[4931]: I0130 06:53:50.876572 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_890734fc-018f-4d2e-bc3e-ef4399f477da/aodh-notifier/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.046135 4931 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-cb1e-account-create-update-6drdw_51e6957d-e715-4a84-9952-19f773cfe882/mariadb-account-create-update/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.117525 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-db-create-fmv6s_681b527a-d511-4db8-8f19-1df02bbf9f61/mariadb-database-create/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.242614 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-db-sync-rq4fv_76eec61d-6ff6-4286-9102-758374c6fa27/aodh-db-sync/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.322182 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-847c6776d8-4sw8x_62d9ff65-c8d2-413f-b323-47a1db5ea2ed/barbican-api/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.422584 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:53:51 crc kubenswrapper[4931]: E0130 06:53:51.422844 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.463830 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-847c6776d8-4sw8x_62d9ff65-c8d2-413f-b323-47a1db5ea2ed/barbican-api-log/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.530739 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67d8db4f6b-c2v48_acbd1f75-958f-4fe5-8d52-f32c4d6c53f1/barbican-keystone-listener/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.603260 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67d8db4f6b-c2v48_acbd1f75-958f-4fe5-8d52-f32c4d6c53f1/barbican-keystone-listener-log/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.743288 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-784d6f7789-45xt8_a72d7303-20af-4fe7-be58-962eaa52c31a/barbican-worker-log/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.792063 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-784d6f7789-45xt8_a72d7303-20af-4fe7-be58-962eaa52c31a/barbican-worker/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.970431 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d49c4b54-cfb7-4264-a6b8-9ee32cc53de7/ceilometer-central-agent/0.log" Jan 30 06:53:51 crc kubenswrapper[4931]: I0130 06:53:51.972944 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d49c4b54-cfb7-4264-a6b8-9ee32cc53de7/ceilometer-notification-agent/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.048084 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d49c4b54-cfb7-4264-a6b8-9ee32cc53de7/proxy-httpd/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.102002 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_d49c4b54-cfb7-4264-a6b8-9ee32cc53de7/sg-core/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.203017 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_abe3ac27-91a6-4c8d-880c-b94ad5bd7aea/cinder-api/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.344287 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_abe3ac27-91a6-4c8d-880c-b94ad5bd7aea/cinder-api-log/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.534632 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9be12b3c-c79f-4719-ab10-e3370519fbe3/cinder-backup/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.580629 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9be12b3c-c79f-4719-ab10-e3370519fbe3/probe/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.660592 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_41673f24-5c01-4401-839f-55da60930b4d/cinder-scheduler/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.832959 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_41673f24-5c01-4401-839f-55da60930b4d/probe/0.log" Jan 30 06:53:52 crc kubenswrapper[4931]: I0130 06:53:52.963088 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc/cinder-volume/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.006201 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_6ac5ad1d-f9ee-4fe6-8625-d30e49c099fc/probe/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.074552 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59977785bf-q4vw9_b05cd1de-6848-4de5-92f4-399913835db3/init/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.204261 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59977785bf-q4vw9_b05cd1de-6848-4de5-92f4-399913835db3/init/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.237762 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59977785bf-q4vw9_b05cd1de-6848-4de5-92f4-399913835db3/dnsmasq-dns/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.303369 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_94f5b24a-840b-4206-a190-63cd6339ed70/glance-httpd/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.388582 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_94f5b24a-840b-4206-a190-63cd6339ed70/glance-log/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.493044 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf/glance-httpd/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.522138 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ee8afaf3-b244-4273-8aa5-cd9bfbf10ddf/glance-log/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.694703 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-0207-account-create-update-nwwgb_4110f6ea-5daa-4a1f-8fc2-f9497b7024f7/mariadb-account-create-update/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.741815 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-795f886c68-gphf9_e3a9064f-a3e2-4734-8b77-9e42deff080a/heat-api/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.909342 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7944f98bdf-sfnzs_7094dd36-79d9-4c63-9441-1753815af4a7/heat-cfnapi/0.log" Jan 30 06:53:53 crc kubenswrapper[4931]: I0130 06:53:53.938074 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-create-md7t7_65b44b5a-7476-44a4-b7ca-e6c246e9afdc/mariadb-database-create/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.106966 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-sync-t75hv_eae9c157-1120-45ac-8d6c-cc417f364b1f/heat-db-sync/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.139985 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6d6f44c564-6wts7_78cdbc3b-0ff9-4204-b62e-bc784e3fcb87/heat-engine/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.267363 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d459c77c7-fncxw_4254a5c6-88bc-4b8f-a425-79d9bea9eb6d/horizon/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.347131 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5d459c77c7-fncxw_4254a5c6-88bc-4b8f-a425-79d9bea9eb6d/horizon-log/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.439538 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_eb1cdd0a-4520-49ce-8bc6-686dba45e7e8/kube-state-metrics/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.445949 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6bc679c867-wth9b_18835617-9ad2-4502-bbda-d4ac538081bd/keystone-api/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.682594 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-65c4-account-create-update-rfndg_037161b5-dad9-4d8f-9be4-f980ee947129/mariadb-account-create-update/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.730732 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_50c75c49-3fc8-4f3e-9af2-66535e3b49a9/manila-api/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.767174 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_50c75c49-3fc8-4f3e-9af2-66535e3b49a9/manila-api-log/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.870909 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-create-45ct9_448719bb-ff8e-4d9e-982b-a8425f907a15/mariadb-database-create/0.log" Jan 30 06:53:54 crc kubenswrapper[4931]: I0130 06:53:54.923974 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-db-sync-ctjj7_2f518288-3c69-4f3a-9e32-9f9211cab22a/manila-db-sync/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.066157 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_b9eb637a-1c6e-47f5-87ec-fa28c244db0b/probe/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.173836 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_b9eb637a-1c6e-47f5-87ec-fa28c244db0b/manila-scheduler/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.201223 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70/manila-share/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.286272 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_3e8f1fa3-1b0e-4bc4-bde3-48a12ea6de70/probe/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.380217 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_371cff3f-3d31-4dc6-98eb-b03f2d967337/adoption/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.807495 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78978fdd5c-pqg87_ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5/neutron-httpd/0.log" Jan 30 06:53:55 crc kubenswrapper[4931]: I0130 06:53:55.978688 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-78978fdd5c-pqg87_ef26e9fb-2e6d-4582-a140-1ebd8eebc9e5/neutron-api/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.239174 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d940c452-f401-4c40-accd-cb3178bc0490/nova-api-api/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.272992 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d940c452-f401-4c40-accd-cb3178bc0490/nova-api-log/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.349269 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_79469af6-a764-49c6-beaf-b49185c1028a/nova-cell0-conductor-conductor/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.661856 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a12d7d7a-0b33-425e-98be-5a28ef924b22/nova-cell1-conductor-conductor/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.693390 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c9e49a5c-323c-46de-b34f-2fef9465e277/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.983389 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_36294ba3-fcdd-45cd-b4ff-20ee280751da/nova-metadata-metadata/0.log" Jan 30 06:53:56 crc kubenswrapper[4931]: I0130 06:53:56.986539 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_36294ba3-fcdd-45cd-b4ff-20ee280751da/nova-metadata-log/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.237142 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ee8c9751-e3b5-4031-bb46-a7e5fae46f4e/nova-scheduler-scheduler/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.243096 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-79684d7c94-4r69m_eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395/init/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.398047 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-79684d7c94-4r69m_eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395/init/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.476784 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-api-79684d7c94-4r69m_eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395/octavia-api-provider-agent/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.610498 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-k6c7h_2c0bd14d-9378-4c91-87e8-4ec9681103e0/init/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.646841 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-79684d7c94-4r69m_eb1ddf4d-a2c5-4eae-8cd0-6eb3702c3395/octavia-api/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.849783 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-k6c7h_2c0bd14d-9378-4c91-87e8-4ec9681103e0/octavia-healthmanager/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.880493 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-fc9fv_8e6a5234-c995-4b65-afb5-e59eedb65e7f/init/0.log" Jan 30 06:53:57 crc kubenswrapper[4931]: I0130 06:53:57.908707 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-k6c7h_2c0bd14d-9378-4c91-87e8-4ec9681103e0/init/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.094987 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-fc9fv_8e6a5234-c995-4b65-afb5-e59eedb65e7f/init/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.110375 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-fc9fv_8e6a5234-c995-4b65-afb5-e59eedb65e7f/octavia-housekeeping/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.181266 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-g99g6_2c16935c-c83b-4b45-b4cd-b61f20ee764f/init/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.388677 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-g99g6_2c16935c-c83b-4b45-b4cd-b61f20ee764f/init/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.459966 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-g99g6_2c16935c-c83b-4b45-b4cd-b61f20ee764f/octavia-rsyslog/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.471058 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-68w9j_5a81fa26-7f20-43ef-922e-a9e63ee73709/init/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.730812 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-68w9j_5a81fa26-7f20-43ef-922e-a9e63ee73709/init/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.822991 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66165b19-dfc8-403f-ae09-30299db6b19f/mysql-bootstrap/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.895331 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-68w9j_5a81fa26-7f20-43ef-922e-a9e63ee73709/octavia-worker/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.938867 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_66165b19-dfc8-403f-ae09-30299db6b19f/mysql-bootstrap/0.log" Jan 30 06:53:58 crc kubenswrapper[4931]: I0130 06:53:58.977728 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_66165b19-dfc8-403f-ae09-30299db6b19f/galera/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.126891 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3/mysql-bootstrap/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: E0130 06:53:59.249544 4931 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.573542 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3/galera/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.585582 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a9261635-8331-44ae-88d1-df73db930d2d/openstackclient/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.621006 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_01b91d81-d7c5-4bd6-8cdd-c852a5cd2fc3/mysql-bootstrap/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.811319 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cgvfd_9bf15e4b-1a09-401b-87e9-97cff0ee8c91/ovn-controller/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.880971 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cs66w_608bb576-83fd-4c7c-b8b3-a4f9ff46b661/openstack-network-exporter/0.log" Jan 30 06:53:59 crc kubenswrapper[4931]: I0130 06:53:59.997542 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5tgfr_56a7c911-151f-42ff-b005-58bdaecd5d8b/ovsdb-server-init/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.235838 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5tgfr_56a7c911-151f-42ff-b005-58bdaecd5d8b/ovsdb-server-init/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.236407 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5tgfr_56a7c911-151f-42ff-b005-58bdaecd5d8b/ovs-vswitchd/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.273100 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5tgfr_56a7c911-151f-42ff-b005-58bdaecd5d8b/ovsdb-server/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.490287 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe/openstack-network-exporter/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.509303 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_49126964-dfd0-4103-a3fd-5244d9b49b9d/adoption/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.577030 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c1e8d282-d2e1-4053-ad7e-51ee97f7ffbe/ovn-northd/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.682831 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a05ee1a7-c012-4766-8d48-3b508d4f8cd2/openstack-network-exporter/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.727543 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_a05ee1a7-c012-4766-8d48-3b508d4f8cd2/ovsdbserver-nb/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.944579 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_6e651c73-1761-4cda-83b7-5a80fa3af6f4/openstack-network-exporter/0.log" Jan 30 06:54:00 crc kubenswrapper[4931]: I0130 06:54:00.968466 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_6e651c73-1761-4cda-83b7-5a80fa3af6f4/ovsdbserver-nb/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.080510 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_60983391-7945-4efe-ae6d-7c6ae80e2df8/openstack-network-exporter/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.158721 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_60983391-7945-4efe-ae6d-7c6ae80e2df8/ovsdbserver-nb/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.247356 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ffc86399-3f01-4c6a-942d-b255a957dc52/openstack-network-exporter/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.306415 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ffc86399-3f01-4c6a-942d-b255a957dc52/ovsdbserver-sb/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.336658 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f5c3365d-6967-42e2-b00c-887a82a1b73e/memcached/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.469776 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_78634c9d-d8d8-4eed-adc7-fe9fdbf69a11/openstack-network-exporter/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.476469 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_78634c9d-d8d8-4eed-adc7-fe9fdbf69a11/ovsdbserver-sb/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.528392 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_83787678-4305-4893-8aa4-d1ddd8c15343/openstack-network-exporter/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.659146 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_83787678-4305-4893-8aa4-d1ddd8c15343/ovsdbserver-sb/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.678002 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b467bdbbb-ds8j4_6359f2c1-ac0c-4084-969e-7cff11e8b4d8/placement-api/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.735766 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b467bdbbb-ds8j4_6359f2c1-ac0c-4084-969e-7cff11e8b4d8/placement-log/0.log" Jan 30 06:54:01 crc kubenswrapper[4931]: I0130 06:54:01.847775 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_201626a3-bc04-48ab-859c-5a7ffe97670e/init-config-reloader/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.009875 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_201626a3-bc04-48ab-859c-5a7ffe97670e/init-config-reloader/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.010260 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_201626a3-bc04-48ab-859c-5a7ffe97670e/thanos-sidecar/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.041360 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_201626a3-bc04-48ab-859c-5a7ffe97670e/config-reloader/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.054053 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_201626a3-bc04-48ab-859c-5a7ffe97670e/prometheus/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.214496 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ea4042a1-4ebc-4b11-a7e4-e695a668aa81/setup-container/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.362145 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ea4042a1-4ebc-4b11-a7e4-e695a668aa81/setup-container/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.400148 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8dabfefe-4927-44d0-b370-f7e28f2a4f57/setup-container/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.421917 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:54:02 crc kubenswrapper[4931]: E0130 06:54:02.422168 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.456443 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ea4042a1-4ebc-4b11-a7e4-e695a668aa81/rabbitmq/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.577987 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8dabfefe-4927-44d0-b370-f7e28f2a4f57/setup-container/0.log" Jan 30 06:54:02 crc kubenswrapper[4931]: I0130 06:54:02.788171 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8dabfefe-4927-44d0-b370-f7e28f2a4f57/rabbitmq/0.log" Jan 30 06:54:03 crc kubenswrapper[4931]: I0130 06:54:03.217108 4931 scope.go:117] "RemoveContainer" containerID="c4a34d96918d76961993d44bfa88235147b0d786fa8c13e4e74a51d5c91d0e97" Jan 30 06:54:03 crc kubenswrapper[4931]: I0130 06:54:03.246039 4931 scope.go:117] "RemoveContainer" containerID="870048acdaf12d315c0defcb750ddf7659f2ce415320432c25b469719ef6d1d5" Jan 30 06:54:13 crc kubenswrapper[4931]: I0130 06:54:13.459468 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:54:13 crc kubenswrapper[4931]: E0130 06:54:13.460294 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.283939 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/util/0.log" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.476820 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/util/0.log" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.499635 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/pull/0.log" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.499941 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/pull/0.log" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.678305 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/util/0.log" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.685745 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/pull/0.log" Jan 30 06:54:25 crc kubenswrapper[4931]: I0130 06:54:25.688945 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d18p2ff_7fc41231-569f-429f-bcc3-d7d63888874b/extract/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.002313 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-4wv6z_80b25db7-e1c2-4787-89f4-952cd7e845ba/manager/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.007825 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-mttxk_eb76dd84-30db-4769-852c-9a42814949d7/manager/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.107150 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-bf56z_dea1ae69-0c15-4228-a323-dc6f762e3c82/manager/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.344490 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-nsn26_2773429e-ccbb-43a4-a88a-a1cd41a63e10/manager/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.369482 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-lmgq2_d806e5bf-8346-46c0-a3de-5f8412e92b4f/manager/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.478708 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-l5dv2_ce7feb31-22f3-42d9-83b1-cd9155abae99/manager/0.log" Jan 30 06:54:26 crc kubenswrapper[4931]: I0130 06:54:26.743795 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-v9fgj_33b18ace-2da3-4bad-b093-d7db2aad7f50/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.022180 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-ddtbw_cc5025a4-0807-478d-831a-c6ed424628a9/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.032571 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-5sgtg_a3f6ed4d-518f-4415-9378-73fca072d431/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.164578 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-tzxqv_29ae7a52-ff32-4f97-8f6c-830ac4e4b40b/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.282963 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-fsdvn_8553945b-dfe3-4c77-bb73-dce58c6ad3ba/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.421659 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:54:27 crc kubenswrapper[4931]: E0130 06:54:27.422188 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.464964 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-wssqz_5e6de10d-baf2-4ef4-9acf-d093ee65c4fd/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.691572 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-5l9jv_2b83a9b3-5579-438f-8f65-effa382b726c/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.772869 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-kndp7_456074da-531d-471b-92d3-cb4ea156bfae/manager/0.log" Jan 30 06:54:27 crc kubenswrapper[4931]: I0130 06:54:27.879701 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-86dfb79cc7hfxjp_47b128c8-46ef-422c-aabc-1220f85fef83/manager/0.log" Jan 30 06:54:28 crc kubenswrapper[4931]: I0130 06:54:28.127094 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-757f46c65d-rscb9_27c443b8-82d2-41c1-b747-b89e6cb44f16/operator/0.log" Jan 30 06:54:28 crc kubenswrapper[4931]: I0130 06:54:28.718396 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7znpc_2efa198c-4fe6-4ed2-9627-14a9ce525363/registry-server/0.log" Jan 30 06:54:28 crc kubenswrapper[4931]: I0130 06:54:28.839176 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mkk7j_a536697c-8056-4907-a09e-b23aa129435d/manager/0.log" Jan 30 06:54:29 crc kubenswrapper[4931]: I0130 06:54:29.004747 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-t4scx_59634caa-7fe0-49a1-98bf-dbc61a15f495/manager/0.log" Jan 30 06:54:29 crc kubenswrapper[4931]: I0130 06:54:29.082596 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v4vnz_ad890bc5-5b72-4833-86d5-2c022cd87e4a/operator/0.log" Jan 30 06:54:29 crc kubenswrapper[4931]: I0130 06:54:29.470025 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-gqvgs_3d63764e-5f26-4a63-870a-af0e86eb5d23/manager/0.log" Jan 30 06:54:29 crc kubenswrapper[4931]: I0130 06:54:29.732519 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-vccxr_9e5eb1e9-111a-4230-92d6-5b1fbc332ada/manager/0.log" Jan 30 06:54:29 crc kubenswrapper[4931]: I0130 06:54:29.780314 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-gqv2m_8e470db6-3785-4da2-9b83-5242d6712d6a/manager/0.log" Jan 30 06:54:29 crc kubenswrapper[4931]: I0130 06:54:29.823319 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-vqp2s_6d92f2e0-367c-428a-bcd5-cf6e5846046f/manager/0.log" Jan 30 06:54:30 crc kubenswrapper[4931]: I0130 06:54:30.080621 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b6f655c79-qpp9f_5852e12a-376e-420f-a0fd-efecae7ef623/manager/0.log" Jan 30 06:54:41 crc kubenswrapper[4931]: I0130 06:54:41.423051 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:54:41 crc kubenswrapper[4931]: E0130 06:54:41.424340 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:54:53 crc kubenswrapper[4931]: I0130 06:54:53.173329 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-w2zzb_e521b474-9f29-4841-a365-ed1589358607/control-plane-machine-set-operator/0.log" Jan 30 06:54:53 crc kubenswrapper[4931]: I0130 06:54:53.359304 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k9mcd_177d163e-7881-411f-a61b-a00e9c8bc9dc/kube-rbac-proxy/0.log" Jan 30 06:54:53 crc kubenswrapper[4931]: I0130 06:54:53.419030 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-k9mcd_177d163e-7881-411f-a61b-a00e9c8bc9dc/machine-api-operator/0.log" Jan 30 06:54:54 crc kubenswrapper[4931]: I0130 06:54:54.422652 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:54:54 crc 
kubenswrapper[4931]: E0130 06:54:54.423077 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:55:08 crc kubenswrapper[4931]: I0130 06:55:08.423937 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:55:08 crc kubenswrapper[4931]: E0130 06:55:08.424767 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:55:08 crc kubenswrapper[4931]: I0130 06:55:08.546046 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-8l2w4_34b0cb15-9c48-4bb3-89e7-85efd5b8b76c/cert-manager-controller/0.log" Jan 30 06:55:08 crc kubenswrapper[4931]: I0130 06:55:08.782348 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hsrfm_5049b2a6-f85e-4250-9b12-c70705adaf35/cert-manager-webhook/0.log" Jan 30 06:55:08 crc kubenswrapper[4931]: I0130 06:55:08.791988 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-qvz8h_39da06e0-e9ea-4570-b486-3c0d2fe79820/cert-manager-cainjector/0.log" Jan 30 06:55:20 crc kubenswrapper[4931]: I0130 06:55:20.422098 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:55:20 crc kubenswrapper[4931]: E0130 06:55:20.422817 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:55:24 crc kubenswrapper[4931]: I0130 06:55:24.480095 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mwzz2_8800ae15-51ee-4310-889d-3608008986bd/nmstate-console-plugin/0.log" Jan 30 06:55:24 crc kubenswrapper[4931]: I0130 06:55:24.686182 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6mhzq_66e77bed-ca3a-4cfe-874c-d6874c52ab0e/nmstate-handler/0.log" Jan 30 06:55:24 crc kubenswrapper[4931]: I0130 06:55:24.788251 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2z4jr_01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c/kube-rbac-proxy/0.log" Jan 30 06:55:24 crc kubenswrapper[4931]: I0130 06:55:24.870903 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-2z4jr_01e6ed8f-a69f-4e32-b275-6ea9a5cebf1c/nmstate-metrics/0.log" Jan 30 06:55:24 crc kubenswrapper[4931]: I0130 
06:55:24.982885 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-5tdhq_1c291268-6fc4-48a1-94dc-1e9e052e7bc6/nmstate-operator/0.log" Jan 30 06:55:25 crc kubenswrapper[4931]: I0130 06:55:25.059008 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-krf2l_a115b68a-a9ad-44db-90f5-1f016556956a/nmstate-webhook/0.log" Jan 30 06:55:33 crc kubenswrapper[4931]: I0130 06:55:33.422628 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:55:33 crc kubenswrapper[4931]: E0130 06:55:33.423281 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:55:41 crc kubenswrapper[4931]: I0130 06:55:41.032912 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lx27m_2668098b-064f-4807-b2ee-7efb5dc89fb8/prometheus-operator/0.log" Jan 30 06:55:41 crc kubenswrapper[4931]: I0130 06:55:41.242304 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr_38907dab-62b6-4364-b48c-8300b1fa2ad2/prometheus-operator-admission-webhook/0.log" Jan 30 06:55:41 crc kubenswrapper[4931]: I0130 06:55:41.299938 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z_119a1b91-5877-408e-8721-dccac5a05367/prometheus-operator-admission-webhook/0.log" Jan 30 06:55:41 crc kubenswrapper[4931]: I0130 06:55:41.448319 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qm276_c63c0b3f-7290-4318-8db6-a1ae150b22e0/operator/0.log" Jan 30 06:55:41 crc kubenswrapper[4931]: I0130 06:55:41.497855 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gw297_55072f8e-c1ef-45fd-9ec3-43e74afed3a7/perses-operator/0.log" Jan 30 06:55:45 crc kubenswrapper[4931]: I0130 06:55:45.428230 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:55:45 crc kubenswrapper[4931]: E0130 06:55:45.428996 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.286970 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-g5mxs_c9c06e8c-f207-490b-8bea-d6a742d63e72/kube-rbac-proxy/0.log" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.422060 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:55:58 crc kubenswrapper[4931]: E0130 06:55:58.422611 
4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.542369 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-frr-files/0.log" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.803013 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-metrics/0.log" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.806110 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-g5mxs_c9c06e8c-f207-490b-8bea-d6a742d63e72/controller/0.log" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.810014 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-reloader/0.log" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.817854 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-frr-files/0.log" Jan 30 06:55:58 crc kubenswrapper[4931]: I0130 06:55:58.995923 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-reloader/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.202978 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-frr-files/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.234397 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-reloader/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.236899 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-metrics/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.237214 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-metrics/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.480465 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-metrics/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.499116 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-reloader/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.499156 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/cp-frr-files/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.538666 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/controller/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.696389 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/frr-metrics/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.759456 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/kube-rbac-proxy/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.785218 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/kube-rbac-proxy-frr/0.log" Jan 30 06:55:59 crc kubenswrapper[4931]: I0130 06:55:59.895027 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/reloader/0.log" Jan 30 06:56:00 crc kubenswrapper[4931]: I0130 06:56:00.027074 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-56ftz_3ed7b42a-5ea1-4c7e-b3bc-a0431a72696e/frr-k8s-webhook-server/0.log" Jan 30 06:56:00 crc kubenswrapper[4931]: I0130 06:56:00.184923 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6969d469fc-rzjqg_164111f5-1bd4-4fc2-84f5-7418ee6e7e62/manager/0.log" Jan 30 06:56:00 crc kubenswrapper[4931]: I0130 06:56:00.352999 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7659bb7b4d-ssrqf_47321851-ef2d-47a3-949a-58f2e87df8dd/webhook-server/0.log" Jan 30 06:56:00 crc kubenswrapper[4931]: I0130 06:56:00.503922 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rcpl2_f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18/kube-rbac-proxy/0.log" Jan 30 06:56:01 crc kubenswrapper[4931]: I0130 06:56:01.245858 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rcpl2_f9e7a8e0-b04b-4d38-a05c-a9baa66d2c18/speaker/0.log" Jan 30 06:56:02 crc kubenswrapper[4931]: I0130 06:56:02.042563 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-768qr_be5b19a8-200f-462e-b8f2-fc956ec52080/frr/0.log" Jan 30 06:56:10 crc kubenswrapper[4931]: I0130 06:56:10.422654 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:56:10 crc kubenswrapper[4931]: E0130 06:56:10.424546 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:56:11 crc kubenswrapper[4931]: I0130 06:56:11.065631 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-0207-account-create-update-nwwgb"] Jan 30 06:56:11 crc kubenswrapper[4931]: I0130 06:56:11.077506 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-md7t7"] Jan 30 06:56:11 crc kubenswrapper[4931]: I0130 06:56:11.090781 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-md7t7"] Jan 30 06:56:11 crc kubenswrapper[4931]: I0130 06:56:11.101499 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-0207-account-create-update-nwwgb"] Jan 30 06:56:11 crc kubenswrapper[4931]: I0130 06:56:11.435967 4931 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4110f6ea-5daa-4a1f-8fc2-f9497b7024f7" path="/var/lib/kubelet/pods/4110f6ea-5daa-4a1f-8fc2-f9497b7024f7/volumes" Jan 30 06:56:11 crc kubenswrapper[4931]: I0130 06:56:11.438087 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b44b5a-7476-44a4-b7ca-e6c246e9afdc" path="/var/lib/kubelet/pods/65b44b5a-7476-44a4-b7ca-e6c246e9afdc/volumes" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.006629 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/util/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.155005 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/pull/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.157201 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/util/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.197543 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/pull/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.348211 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/pull/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.362911 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/util/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.372208 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6bjp6_150d0383-4876-424e-b189-6ce3cceccb72/extract/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.529580 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/util/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.704611 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/pull/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.710090 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/pull/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.724787 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/util/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.895945 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/util/0.log" Jan 30 
06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.915313 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/pull/0.log" Jan 30 06:56:16 crc kubenswrapper[4931]: I0130 06:56:16.946785 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134nvt4_52241d6a-5526-4d2b-baeb-e1fd0361a188/extract/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.107850 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/util/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.289820 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/pull/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.337921 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/pull/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.347530 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/util/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.506930 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/util/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.524629 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/extract/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.547719 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5tdhwd_686d3bad-998e-4688-a556-c25a0770810a/pull/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.701649 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/util/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.873015 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/util/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.908577 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/pull/0.log" Jan 30 06:56:17 crc kubenswrapper[4931]: I0130 06:56:17.914051 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/pull/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.030939 4931 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/util/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.058266 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/pull/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.092043 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gknll_8db6c802-44ea-48b4-a63f-c6c43492e6bc/extract/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.222963 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/extract-utilities/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.399812 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/extract-content/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.427228 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/extract-utilities/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.434664 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/extract-content/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.572498 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/extract-utilities/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.591083 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/extract-content/0.log" Jan 30 06:56:18 crc kubenswrapper[4931]: I0130 06:56:18.790897 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/extract-utilities/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.026537 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/extract-utilities/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.059089 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/extract-content/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.097118 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/extract-content/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.256177 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/extract-content/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.295160 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/extract-utilities/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: 
I0130 06:56:19.327210 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wn8rd_f88493be-1e8e-47b8-9ac7-d035ba0b6e36/registry-server/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.465212 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ng75v_29014adb-d772-451f-b4bf-9fdb5d417d1e/marketplace-operator/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.541272 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/extract-utilities/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.746352 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/extract-content/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.755820 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/extract-content/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.766262 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw8bs_9591e541-c3a7-4565-a829-b3da700f84ff/registry-server/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.798542 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/extract-utilities/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.964312 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/extract-utilities/0.log" Jan 30 06:56:19 crc kubenswrapper[4931]: I0130 06:56:19.977664 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/extract-utilities/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.015853 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/extract-content/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.186891 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w6b74_5aacb80d-976e-4059-9c84-857aab618f4e/registry-server/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.236261 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/extract-utilities/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.236562 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/extract-content/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.246658 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/extract-content/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.414280 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/extract-content/0.log" Jan 30 06:56:20 crc kubenswrapper[4931]: I0130 06:56:20.425205 
4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/extract-utilities/0.log" Jan 30 06:56:21 crc kubenswrapper[4931]: I0130 06:56:21.142541 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kg222_4c0c107d-a03c-479f-b127-2824affd9b35/registry-server/0.log" Jan 30 06:56:23 crc kubenswrapper[4931]: I0130 06:56:23.051544 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-t75hv"] Jan 30 06:56:23 crc kubenswrapper[4931]: I0130 06:56:23.063871 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-t75hv"] Jan 30 06:56:23 crc kubenswrapper[4931]: I0130 06:56:23.435088 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae9c157-1120-45ac-8d6c-cc417f364b1f" path="/var/lib/kubelet/pods/eae9c157-1120-45ac-8d6c-cc417f364b1f/volumes" Jan 30 06:56:24 crc kubenswrapper[4931]: I0130 06:56:24.423807 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:56:24 crc kubenswrapper[4931]: E0130 06:56:24.424248 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:56:34 crc kubenswrapper[4931]: I0130 06:56:34.063543 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-689f7d46dd-nd6wr_38907dab-62b6-4364-b48c-8300b1fa2ad2/prometheus-operator-admission-webhook/0.log" Jan 30 06:56:34 crc kubenswrapper[4931]: I0130 06:56:34.069098 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lx27m_2668098b-064f-4807-b2ee-7efb5dc89fb8/prometheus-operator/0.log" Jan 30 06:56:34 crc kubenswrapper[4931]: I0130 06:56:34.086451 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-689f7d46dd-vv89z_119a1b91-5877-408e-8721-dccac5a05367/prometheus-operator-admission-webhook/0.log" Jan 30 06:56:34 crc kubenswrapper[4931]: I0130 06:56:34.258328 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-gw297_55072f8e-c1ef-45fd-9ec3-43e74afed3a7/perses-operator/0.log" Jan 30 06:56:34 crc kubenswrapper[4931]: I0130 06:56:34.271517 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-qm276_c63c0b3f-7290-4318-8db6-a1ae150b22e0/operator/0.log" Jan 30 06:56:37 crc kubenswrapper[4931]: I0130 06:56:37.422947 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:56:37 crc kubenswrapper[4931]: E0130 06:56:37.423702 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:56:50 crc kubenswrapper[4931]: I0130 06:56:50.422478 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:56:50 crc kubenswrapper[4931]: E0130 06:56:50.423411 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:57:01 crc kubenswrapper[4931]: E0130 06:57:01.722725 4931 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.179:47192->38.102.83.179:45103: write tcp 38.102.83.179:47192->38.102.83.179:45103: write: broken pipe Jan 30 06:57:03 crc kubenswrapper[4931]: I0130 06:57:03.423370 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:57:03 crc kubenswrapper[4931]: E0130 06:57:03.424189 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:57:03 crc kubenswrapper[4931]: I0130 06:57:03.459654 4931 scope.go:117] "RemoveContainer" containerID="0c6e2269ccd94b91b1bc61c0d6038a0f312e1ff979dd42b9e25c99edd027ce3a" Jan 30 06:57:03 crc kubenswrapper[4931]: I0130 06:57:03.520455 4931 scope.go:117] "RemoveContainer" containerID="21f9a5808ddea5a133a2d53b441ae213a916b33040640a7576f4d7163df3f19d" Jan 30 06:57:03 crc kubenswrapper[4931]: I0130 06:57:03.565247 4931 scope.go:117] "RemoveContainer" containerID="461da2cabb65077a09c290e33233aae28ff5843458cdbe68b5fe17f6c78dd05f" Jan 30 06:57:18 crc kubenswrapper[4931]: I0130 06:57:18.421847 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:57:18 crc kubenswrapper[4931]: E0130 06:57:18.422726 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:57:30 crc kubenswrapper[4931]: I0130 06:57:30.422304 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:57:30 crc kubenswrapper[4931]: E0130 06:57:30.423270 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" 
podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:57:44 crc kubenswrapper[4931]: I0130 06:57:44.423127 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:57:44 crc kubenswrapper[4931]: E0130 06:57:44.424161 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:57:59 crc kubenswrapper[4931]: I0130 06:57:59.422203 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:57:59 crc kubenswrapper[4931]: E0130 06:57:59.423294 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:58:06 crc kubenswrapper[4931]: I0130 06:58:06.898695 4931 generic.go:334] "Generic (PLEG): container finished" podID="1075a448-992c-4364-842b-06dda255cd42" containerID="f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92" exitCode=0 Jan 30 06:58:06 crc kubenswrapper[4931]: I0130 06:58:06.898924 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-57fqf/must-gather-v9ths" event={"ID":"1075a448-992c-4364-842b-06dda255cd42","Type":"ContainerDied","Data":"f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92"} Jan 30 06:58:06 crc kubenswrapper[4931]: I0130 06:58:06.899880 4931 scope.go:117] "RemoveContainer" containerID="f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92" Jan 30 06:58:07 crc kubenswrapper[4931]: I0130 06:58:07.378237 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57fqf_must-gather-v9ths_1075a448-992c-4364-842b-06dda255cd42/gather/0.log" Jan 30 06:58:14 crc kubenswrapper[4931]: I0130 06:58:14.421953 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:58:14 crc kubenswrapper[4931]: E0130 06:58:14.422701 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:58:14 crc kubenswrapper[4931]: I0130 06:58:14.976192 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-57fqf/must-gather-v9ths"] Jan 30 06:58:14 crc kubenswrapper[4931]: I0130 06:58:14.976480 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-57fqf/must-gather-v9ths" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="copy" containerID="cri-o://b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942" 
gracePeriod=2 Jan 30 06:58:14 crc kubenswrapper[4931]: I0130 06:58:14.987269 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-57fqf/must-gather-v9ths"] Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.454009 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57fqf_must-gather-v9ths_1075a448-992c-4364-842b-06dda255cd42/copy/0.log" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.456081 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.565064 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcnzx\" (UniqueName: \"kubernetes.io/projected/1075a448-992c-4364-842b-06dda255cd42-kube-api-access-wcnzx\") pod \"1075a448-992c-4364-842b-06dda255cd42\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.565400 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1075a448-992c-4364-842b-06dda255cd42-must-gather-output\") pod \"1075a448-992c-4364-842b-06dda255cd42\" (UID: \"1075a448-992c-4364-842b-06dda255cd42\") " Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.574782 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1075a448-992c-4364-842b-06dda255cd42-kube-api-access-wcnzx" (OuterVolumeSpecName: "kube-api-access-wcnzx") pod "1075a448-992c-4364-842b-06dda255cd42" (UID: "1075a448-992c-4364-842b-06dda255cd42"). InnerVolumeSpecName "kube-api-access-wcnzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.667935 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcnzx\" (UniqueName: \"kubernetes.io/projected/1075a448-992c-4364-842b-06dda255cd42-kube-api-access-wcnzx\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.740758 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1075a448-992c-4364-842b-06dda255cd42-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1075a448-992c-4364-842b-06dda255cd42" (UID: "1075a448-992c-4364-842b-06dda255cd42"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.769711 4931 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1075a448-992c-4364-842b-06dda255cd42-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.998226 4931 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-57fqf_must-gather-v9ths_1075a448-992c-4364-842b-06dda255cd42/copy/0.log" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.998716 4931 generic.go:334] "Generic (PLEG): container finished" podID="1075a448-992c-4364-842b-06dda255cd42" containerID="b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942" exitCode=143 Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.998852 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-57fqf/must-gather-v9ths" Jan 30 06:58:15 crc kubenswrapper[4931]: I0130 06:58:15.998884 4931 scope.go:117] "RemoveContainer" containerID="b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942" Jan 30 06:58:16 crc kubenswrapper[4931]: I0130 06:58:16.034895 4931 scope.go:117] "RemoveContainer" containerID="f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92" Jan 30 06:58:16 crc kubenswrapper[4931]: I0130 06:58:16.134370 4931 scope.go:117] "RemoveContainer" containerID="b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942" Jan 30 06:58:16 crc kubenswrapper[4931]: E0130 06:58:16.135175 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942\": container with ID starting with b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942 not found: ID does not exist" containerID="b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942" Jan 30 06:58:16 crc kubenswrapper[4931]: I0130 06:58:16.135286 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942"} err="failed to get container status \"b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942\": rpc error: code = NotFound desc = could not find container \"b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942\": container with ID starting with b26cd51fcb25ad66a348c9e69c9d67a17e9cbab697a628c6367a7c23833c1942 not found: ID does not exist" Jan 30 06:58:16 crc kubenswrapper[4931]: I0130 06:58:16.135366 4931 scope.go:117] "RemoveContainer" containerID="f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92" Jan 30 06:58:16 crc kubenswrapper[4931]: E0130 06:58:16.135822 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92\": container with ID starting with f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92 not found: ID does not exist" containerID="f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92" Jan 30 06:58:16 crc kubenswrapper[4931]: I0130 06:58:16.135900 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92"} err="failed to get container status \"f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92\": rpc error: code = NotFound desc = could not find container \"f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92\": container with ID starting with f882eeb17f43412896a11902692366cd99cf2b3a65c45cfc4ea200e7ab314d92 not found: ID does not exist" Jan 30 06:58:17 crc kubenswrapper[4931]: I0130 06:58:17.438346 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1075a448-992c-4364-842b-06dda255cd42" path="/var/lib/kubelet/pods/1075a448-992c-4364-842b-06dda255cd42/volumes" Jan 30 06:58:26 crc kubenswrapper[4931]: I0130 06:58:26.423007 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:58:26 crc kubenswrapper[4931]: E0130 06:58:26.424111 4931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wfdxs_openshift-machine-config-operator(189be3dc-d439-47c2-b1f2-7413fc4b5e85)\"" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.190067 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fq22"] Jan 30 06:58:27 crc kubenswrapper[4931]: E0130 06:58:27.191283 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="extract-utilities" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191301 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="extract-utilities" Jan 30 06:58:27 crc kubenswrapper[4931]: E0130 06:58:27.191328 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="copy" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191336 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="copy" Jan 30 06:58:27 crc kubenswrapper[4931]: E0130 06:58:27.191353 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="registry-server" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191362 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="registry-server" Jan 30 06:58:27 crc kubenswrapper[4931]: E0130 06:58:27.191375 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="extract-content" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191383 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="extract-content" Jan 30 06:58:27 crc kubenswrapper[4931]: E0130 06:58:27.191417 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="gather" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191460 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="gather" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191722 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b1c327-8fc5-4f2c-adf4-7c61aa9ff6cf" containerName="registry-server" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191734 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="copy" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.191755 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="1075a448-992c-4364-842b-06dda255cd42" containerName="gather" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.195558 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.219552 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fq22"] Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.360810 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-utilities\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.360949 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-catalog-content\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.361012 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcbxb\" (UniqueName: \"kubernetes.io/projected/81aaff5d-9686-458d-bd32-221d0ae71038-kube-api-access-hcbxb\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.463452 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-utilities\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.463567 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-catalog-content\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.463598 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcbxb\" (UniqueName: \"kubernetes.io/projected/81aaff5d-9686-458d-bd32-221d0ae71038-kube-api-access-hcbxb\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.464253 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-utilities\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.464319 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-catalog-content\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.486180 4931 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hcbxb\" (UniqueName: \"kubernetes.io/projected/81aaff5d-9686-458d-bd32-221d0ae71038-kube-api-access-hcbxb\") pod \"certified-operators-6fq22\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:27 crc kubenswrapper[4931]: I0130 06:58:27.521639 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:28 crc kubenswrapper[4931]: I0130 06:58:28.133537 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fq22"] Jan 30 06:58:28 crc kubenswrapper[4931]: I0130 06:58:28.156086 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerStarted","Data":"44778904158eedde6f889cfc7e7aa952fb69ead9dd240f8b4485e99e02abc043"} Jan 30 06:58:29 crc kubenswrapper[4931]: I0130 06:58:29.168210 4931 generic.go:334] "Generic (PLEG): container finished" podID="81aaff5d-9686-458d-bd32-221d0ae71038" containerID="f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b" exitCode=0 Jan 30 06:58:29 crc kubenswrapper[4931]: I0130 06:58:29.168302 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerDied","Data":"f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b"} Jan 30 06:58:29 crc kubenswrapper[4931]: I0130 06:58:29.171294 4931 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 06:58:31 crc kubenswrapper[4931]: I0130 06:58:31.203081 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerStarted","Data":"af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0"} Jan 30 06:58:31 crc kubenswrapper[4931]: I0130 06:58:31.990629 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cc279"] Jan 30 06:58:31 crc kubenswrapper[4931]: I0130 06:58:31.996855 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.006644 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc279"] Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.089499 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-catalog-content\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.089646 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dgkk\" (UniqueName: \"kubernetes.io/projected/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-kube-api-access-7dgkk\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.089978 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-utilities\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.192104 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-utilities\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.192306 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-catalog-content\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.192342 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dgkk\" (UniqueName: \"kubernetes.io/projected/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-kube-api-access-7dgkk\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.192800 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-utilities\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.192947 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-catalog-content\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.213280 4931 generic.go:334] "Generic (PLEG): container finished" 
podID="81aaff5d-9686-458d-bd32-221d0ae71038" containerID="af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0" exitCode=0 Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.213331 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerDied","Data":"af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0"} Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.224574 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dgkk\" (UniqueName: \"kubernetes.io/projected/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-kube-api-access-7dgkk\") pod \"redhat-operators-cc279\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.332093 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:32 crc kubenswrapper[4931]: I0130 06:58:32.826524 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc279"] Jan 30 06:58:33 crc kubenswrapper[4931]: I0130 06:58:33.227637 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerDied","Data":"b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c"} Jan 30 06:58:33 crc kubenswrapper[4931]: I0130 06:58:33.227495 4931 generic.go:334] "Generic (PLEG): container finished" podID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerID="b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c" exitCode=0 Jan 30 06:58:33 crc kubenswrapper[4931]: I0130 06:58:33.231061 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerStarted","Data":"be5ecc3b88dee9a9394ab880d76a6331d57c03091370e6e5e590da9fe25fb187"} Jan 30 06:58:33 crc kubenswrapper[4931]: I0130 06:58:33.235076 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerStarted","Data":"89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4"} Jan 30 06:58:33 crc kubenswrapper[4931]: I0130 06:58:33.289250 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fq22" podStartSLOduration=2.641569818 podStartE2EDuration="6.289207806s" podCreationTimestamp="2026-01-30 06:58:27 +0000 UTC" firstStartedPulling="2026-01-30 06:58:29.171079184 +0000 UTC m=+6644.540989441" lastFinishedPulling="2026-01-30 06:58:32.818717172 +0000 UTC m=+6648.188627429" observedRunningTime="2026-01-30 06:58:33.270349486 +0000 UTC m=+6648.640259753" watchObservedRunningTime="2026-01-30 06:58:33.289207806 +0000 UTC m=+6648.659118073" Jan 30 06:58:35 crc kubenswrapper[4931]: I0130 06:58:35.257785 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerStarted","Data":"7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe"} Jan 30 06:58:37 crc kubenswrapper[4931]: I0130 06:58:37.522523 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:37 crc kubenswrapper[4931]: I0130 06:58:37.523153 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:37 crc kubenswrapper[4931]: I0130 06:58:37.601705 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:38 crc kubenswrapper[4931]: I0130 06:58:38.384115 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:38 crc kubenswrapper[4931]: I0130 06:58:38.423000 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d" Jan 30 06:58:39 crc kubenswrapper[4931]: I0130 06:58:39.312050 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"e8299e6538c113448d4bfd5d8d4cfd1ff286a1ac24b25d9c331a09fd1c9ae275"} Jan 30 06:58:40 crc kubenswrapper[4931]: I0130 06:58:40.325493 4931 generic.go:334] "Generic (PLEG): container finished" podID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerID="7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe" exitCode=0 Jan 30 06:58:40 crc kubenswrapper[4931]: I0130 06:58:40.325832 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerDied","Data":"7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe"} Jan 30 06:58:40 crc kubenswrapper[4931]: I0130 06:58:40.370344 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fq22"] Jan 30 06:58:40 crc kubenswrapper[4931]: I0130 06:58:40.370789 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6fq22" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="registry-server" containerID="cri-o://89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4" gracePeriod=2 Jan 30 06:58:40 crc kubenswrapper[4931]: I0130 06:58:40.985951 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.129243 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-catalog-content\") pod \"81aaff5d-9686-458d-bd32-221d0ae71038\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.129500 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcbxb\" (UniqueName: \"kubernetes.io/projected/81aaff5d-9686-458d-bd32-221d0ae71038-kube-api-access-hcbxb\") pod \"81aaff5d-9686-458d-bd32-221d0ae71038\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.129680 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-utilities\") pod \"81aaff5d-9686-458d-bd32-221d0ae71038\" (UID: \"81aaff5d-9686-458d-bd32-221d0ae71038\") " Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.130670 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-utilities" (OuterVolumeSpecName: "utilities") pod "81aaff5d-9686-458d-bd32-221d0ae71038" (UID: "81aaff5d-9686-458d-bd32-221d0ae71038"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.148133 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81aaff5d-9686-458d-bd32-221d0ae71038-kube-api-access-hcbxb" (OuterVolumeSpecName: "kube-api-access-hcbxb") pod "81aaff5d-9686-458d-bd32-221d0ae71038" (UID: "81aaff5d-9686-458d-bd32-221d0ae71038"). InnerVolumeSpecName "kube-api-access-hcbxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.185957 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81aaff5d-9686-458d-bd32-221d0ae71038" (UID: "81aaff5d-9686-458d-bd32-221d0ae71038"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.233149 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.233199 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcbxb\" (UniqueName: \"kubernetes.io/projected/81aaff5d-9686-458d-bd32-221d0ae71038-kube-api-access-hcbxb\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.233224 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81aaff5d-9686-458d-bd32-221d0ae71038-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.340119 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerStarted","Data":"4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973"} Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.345236 4931 generic.go:334] "Generic (PLEG): container finished" podID="81aaff5d-9686-458d-bd32-221d0ae71038" containerID="89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4" exitCode=0 Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.345289 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerDied","Data":"89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4"} Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.345321 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fq22" event={"ID":"81aaff5d-9686-458d-bd32-221d0ae71038","Type":"ContainerDied","Data":"44778904158eedde6f889cfc7e7aa952fb69ead9dd240f8b4485e99e02abc043"} Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.345342 4931 scope.go:117] "RemoveContainer" containerID="89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.345686 4931 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fq22" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.374323 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cc279" podStartSLOduration=2.80266755 podStartE2EDuration="10.374306865s" podCreationTimestamp="2026-01-30 06:58:31 +0000 UTC" firstStartedPulling="2026-01-30 06:58:33.230893805 +0000 UTC m=+6648.600804062" lastFinishedPulling="2026-01-30 06:58:40.80253313 +0000 UTC m=+6656.172443377" observedRunningTime="2026-01-30 06:58:41.372270518 +0000 UTC m=+6656.742180785" watchObservedRunningTime="2026-01-30 06:58:41.374306865 +0000 UTC m=+6656.744217122" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.376872 4931 scope.go:117] "RemoveContainer" containerID="af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.396547 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fq22"] Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.412866 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6fq22"] Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.428459 4931 scope.go:117] "RemoveContainer" containerID="f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.438172 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" path="/var/lib/kubelet/pods/81aaff5d-9686-458d-bd32-221d0ae71038/volumes" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.464842 4931 scope.go:117] "RemoveContainer" containerID="89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4" Jan 30 06:58:41 crc kubenswrapper[4931]: E0130 06:58:41.465224 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4\": container with ID starting with 89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4 not found: ID does not exist" containerID="89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.465265 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4"} err="failed to get container status \"89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4\": rpc error: code = NotFound desc = could not find container \"89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4\": container with ID starting with 89295dd9de1f3812fa2d886ac85237f5a1f7e9aeb59b7748f94d7dfed35106a4 not found: ID does not exist" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.465289 4931 scope.go:117] "RemoveContainer" containerID="af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0" Jan 30 06:58:41 crc kubenswrapper[4931]: E0130 06:58:41.465880 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0\": container with ID starting with af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0 not found: ID does not exist" containerID="af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0" Jan 30 
06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.465904 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0"} err="failed to get container status \"af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0\": rpc error: code = NotFound desc = could not find container \"af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0\": container with ID starting with af4105548c114f658f29af4ea5a153d29b084d50d483937fb38f078b1c091db0 not found: ID does not exist" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.465920 4931 scope.go:117] "RemoveContainer" containerID="f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b" Jan 30 06:58:41 crc kubenswrapper[4931]: E0130 06:58:41.466222 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b\": container with ID starting with f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b not found: ID does not exist" containerID="f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b" Jan 30 06:58:41 crc kubenswrapper[4931]: I0130 06:58:41.466263 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b"} err="failed to get container status \"f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b\": rpc error: code = NotFound desc = could not find container \"f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b\": container with ID starting with f78ce1b9e6636a9258d6ae0a8b9483212143bc4b1f12d27b352d9926d4e7c28b not found: ID does not exist" Jan 30 06:58:42 crc kubenswrapper[4931]: I0130 06:58:42.332943 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:42 crc kubenswrapper[4931]: I0130 06:58:42.333011 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:43 crc kubenswrapper[4931]: I0130 06:58:43.396307 4931 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cc279" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="registry-server" probeResult="failure" output=< Jan 30 06:58:43 crc kubenswrapper[4931]: timeout: failed to connect service ":50051" within 1s Jan 30 06:58:43 crc kubenswrapper[4931]: > Jan 30 06:58:52 crc kubenswrapper[4931]: I0130 06:58:52.397930 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:52 crc kubenswrapper[4931]: I0130 06:58:52.489549 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:53 crc kubenswrapper[4931]: I0130 06:58:53.365595 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc279"] Jan 30 06:58:53 crc kubenswrapper[4931]: I0130 06:58:53.486597 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cc279" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="registry-server" containerID="cri-o://4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973" gracePeriod=2 Jan 30 06:58:54 crc 
kubenswrapper[4931]: I0130 06:58:54.067331 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.245484 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-utilities\") pod \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.245724 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-catalog-content\") pod \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.245836 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dgkk\" (UniqueName: \"kubernetes.io/projected/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-kube-api-access-7dgkk\") pod \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\" (UID: \"dc9d97e5-c723-4f3e-b6f4-9e123f907f07\") " Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.246871 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-utilities" (OuterVolumeSpecName: "utilities") pod "dc9d97e5-c723-4f3e-b6f4-9e123f907f07" (UID: "dc9d97e5-c723-4f3e-b6f4-9e123f907f07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.247484 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.258017 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-kube-api-access-7dgkk" (OuterVolumeSpecName: "kube-api-access-7dgkk") pod "dc9d97e5-c723-4f3e-b6f4-9e123f907f07" (UID: "dc9d97e5-c723-4f3e-b6f4-9e123f907f07"). InnerVolumeSpecName "kube-api-access-7dgkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.349395 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dgkk\" (UniqueName: \"kubernetes.io/projected/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-kube-api-access-7dgkk\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.429112 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc9d97e5-c723-4f3e-b6f4-9e123f907f07" (UID: "dc9d97e5-c723-4f3e-b6f4-9e123f907f07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.451151 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9d97e5-c723-4f3e-b6f4-9e123f907f07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.495929 4931 generic.go:334] "Generic (PLEG): container finished" podID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerID="4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973" exitCode=0 Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.495967 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerDied","Data":"4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973"} Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.495991 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc279" event={"ID":"dc9d97e5-c723-4f3e-b6f4-9e123f907f07","Type":"ContainerDied","Data":"be5ecc3b88dee9a9394ab880d76a6331d57c03091370e6e5e590da9fe25fb187"} Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.496007 4931 scope.go:117] "RemoveContainer" containerID="4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.496115 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc279" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.528391 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc279"] Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.530383 4931 scope.go:117] "RemoveContainer" containerID="7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.536507 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cc279"] Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.553743 4931 scope.go:117] "RemoveContainer" containerID="b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.596684 4931 scope.go:117] "RemoveContainer" containerID="4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973" Jan 30 06:58:54 crc kubenswrapper[4931]: E0130 06:58:54.600852 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973\": container with ID starting with 4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973 not found: ID does not exist" containerID="4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.600897 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973"} err="failed to get container status \"4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973\": rpc error: code = NotFound desc = could not find container \"4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973\": container with ID starting with 4dcb7c8bd35030c7351b2cf2466e370f7e0c1d928b94b7b9c1c6f286ea77b973 not found: ID does not exist" Jan 30 06:58:54 crc 
kubenswrapper[4931]: I0130 06:58:54.600922 4931 scope.go:117] "RemoveContainer" containerID="7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe" Jan 30 06:58:54 crc kubenswrapper[4931]: E0130 06:58:54.601158 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe\": container with ID starting with 7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe not found: ID does not exist" containerID="7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.601212 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe"} err="failed to get container status \"7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe\": rpc error: code = NotFound desc = could not find container \"7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe\": container with ID starting with 7c15be5aea842f2fe5e46c76deae434bd626900eab27b89366443b81f3113abe not found: ID does not exist" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.601228 4931 scope.go:117] "RemoveContainer" containerID="b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c" Jan 30 06:58:54 crc kubenswrapper[4931]: E0130 06:58:54.601479 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c\": container with ID starting with b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c not found: ID does not exist" containerID="b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c" Jan 30 06:58:54 crc kubenswrapper[4931]: I0130 06:58:54.601532 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c"} err="failed to get container status \"b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c\": rpc error: code = NotFound desc = could not find container \"b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c\": container with ID starting with b264fc6a6786432333a6351c5fbe74da5fe62b0440da5331f8bd02e9cb9d8e9c not found: ID does not exist" Jan 30 06:58:55 crc kubenswrapper[4931]: I0130 06:58:55.449000 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" path="/var/lib/kubelet/pods/dc9d97e5-c723-4f3e-b6f4-9e123f907f07/volumes" Jan 30 06:59:03 crc kubenswrapper[4931]: I0130 06:59:03.718074 4931 scope.go:117] "RemoveContainer" containerID="a1eade7d298ab8964a28bb2ee51b88b39f3ffa30229e7d9814ffc7f1e58b96ec" Jan 30 06:59:12 crc kubenswrapper[4931]: I0130 06:59:12.070451 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-fmv6s"] Jan 30 06:59:12 crc kubenswrapper[4931]: I0130 06:59:12.083738 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-cb1e-account-create-update-6drdw"] Jan 30 06:59:12 crc kubenswrapper[4931]: I0130 06:59:12.094574 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-fmv6s"] Jan 30 06:59:12 crc kubenswrapper[4931]: I0130 06:59:12.104821 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-cb1e-account-create-update-6drdw"] Jan 30 
06:59:13 crc kubenswrapper[4931]: I0130 06:59:13.441550 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e6957d-e715-4a84-9952-19f773cfe882" path="/var/lib/kubelet/pods/51e6957d-e715-4a84-9952-19f773cfe882/volumes" Jan 30 06:59:13 crc kubenswrapper[4931]: I0130 06:59:13.445063 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="681b527a-d511-4db8-8f19-1df02bbf9f61" path="/var/lib/kubelet/pods/681b527a-d511-4db8-8f19-1df02bbf9f61/volumes" Jan 30 06:59:24 crc kubenswrapper[4931]: I0130 06:59:24.050411 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-rq4fv"] Jan 30 06:59:24 crc kubenswrapper[4931]: I0130 06:59:24.058702 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-rq4fv"] Jan 30 06:59:25 crc kubenswrapper[4931]: I0130 06:59:25.446218 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76eec61d-6ff6-4286-9102-758374c6fa27" path="/var/lib/kubelet/pods/76eec61d-6ff6-4286-9102-758374c6fa27/volumes" Jan 30 06:59:49 crc kubenswrapper[4931]: I0130 06:59:49.088556 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-65c4-account-create-update-rfndg"] Jan 30 06:59:49 crc kubenswrapper[4931]: I0130 06:59:49.109309 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-45ct9"] Jan 30 06:59:49 crc kubenswrapper[4931]: I0130 06:59:49.124691 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-65c4-account-create-update-rfndg"] Jan 30 06:59:49 crc kubenswrapper[4931]: I0130 06:59:49.131830 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-45ct9"] Jan 30 06:59:49 crc kubenswrapper[4931]: I0130 06:59:49.437910 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037161b5-dad9-4d8f-9be4-f980ee947129" path="/var/lib/kubelet/pods/037161b5-dad9-4d8f-9be4-f980ee947129/volumes" Jan 30 06:59:49 crc kubenswrapper[4931]: I0130 06:59:49.438739 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448719bb-ff8e-4d9e-982b-a8425f907a15" path="/var/lib/kubelet/pods/448719bb-ff8e-4d9e-982b-a8425f907a15/volumes" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.184360 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9"] Jan 30 07:00:00 crc kubenswrapper[4931]: E0130 07:00:00.185290 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="extract-content" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185305 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="extract-content" Jan 30 07:00:00 crc kubenswrapper[4931]: E0130 07:00:00.185321 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="extract-utilities" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185329 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="extract-utilities" Jan 30 07:00:00 crc kubenswrapper[4931]: E0130 07:00:00.185347 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="extract-content" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185355 4931 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="extract-content" Jan 30 07:00:00 crc kubenswrapper[4931]: E0130 07:00:00.185387 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="extract-utilities" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185395 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="extract-utilities" Jan 30 07:00:00 crc kubenswrapper[4931]: E0130 07:00:00.185409 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="registry-server" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185417 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="registry-server" Jan 30 07:00:00 crc kubenswrapper[4931]: E0130 07:00:00.185464 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="registry-server" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185472 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="registry-server" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185716 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="81aaff5d-9686-458d-bd32-221d0ae71038" containerName="registry-server" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.185737 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9d97e5-c723-4f3e-b6f4-9e123f907f07" containerName="registry-server" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.186571 4931 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.189500 4931 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.189513 4931 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.211064 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9"] Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.365192 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-config-volume\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.365261 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-secret-volume\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.365309 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdzp\" (UniqueName: \"kubernetes.io/projected/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-kube-api-access-5tdzp\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.467579 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-config-volume\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.467628 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-secret-volume\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.467656 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tdzp\" (UniqueName: \"kubernetes.io/projected/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-kube-api-access-5tdzp\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.470296 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-config-volume\") pod 
\"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.479193 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-secret-volume\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.495536 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tdzp\" (UniqueName: \"kubernetes.io/projected/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-kube-api-access-5tdzp\") pod \"collect-profiles-29495940-2zjc9\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:00 crc kubenswrapper[4931]: I0130 07:00:00.511300 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" Jan 30 07:00:01 crc kubenswrapper[4931]: I0130 07:00:01.046873 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9"] Jan 30 07:00:01 crc kubenswrapper[4931]: I0130 07:00:01.246404 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" event={"ID":"6677b63a-e1ff-4d6f-9b6f-d74e60885d60","Type":"ContainerStarted","Data":"cc7e299fb616f245f3fd8ebbffc8bbb401b10a331200fb770a781891bf8cf695"} Jan 30 07:00:02 crc kubenswrapper[4931]: I0130 07:00:02.259222 4931 generic.go:334] "Generic (PLEG): container finished" podID="6677b63a-e1ff-4d6f-9b6f-d74e60885d60" containerID="6d6a598573685f7bcd09f1bb9e1195b900bc135c468f8a7919e2b7f988b0e7aa" exitCode=0 Jan 30 07:00:02 crc kubenswrapper[4931]: I0130 07:00:02.259287 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" event={"ID":"6677b63a-e1ff-4d6f-9b6f-d74e60885d60","Type":"ContainerDied","Data":"6d6a598573685f7bcd09f1bb9e1195b900bc135c468f8a7919e2b7f988b0e7aa"} Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.555037 4931 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-97qh7"] Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.558235 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.558235 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97qh7"
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.584007 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97qh7"]
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.747968 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tvnj\" (UniqueName: \"kubernetes.io/projected/d11b6770-bb41-4825-90a4-3b4af2daecd9-kube-api-access-8tvnj\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7"
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.748455 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-catalog-content\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7"
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.748644 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-utilities\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7"
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.750120 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9"
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.850035 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-config-volume\") pod \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") "
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.850275 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-secret-volume\") pod \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") "
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.850315 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tdzp\" (UniqueName: \"kubernetes.io/projected/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-kube-api-access-5tdzp\") pod \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\" (UID: \"6677b63a-e1ff-4d6f-9b6f-d74e60885d60\") "
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.850759 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-catalog-content\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7"
pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.850962 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tvnj\" (UniqueName: \"kubernetes.io/projected/d11b6770-bb41-4825-90a4-3b4af2daecd9-kube-api-access-8tvnj\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.851072 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-config-volume" (OuterVolumeSpecName: "config-volume") pod "6677b63a-e1ff-4d6f-9b6f-d74e60885d60" (UID: "6677b63a-e1ff-4d6f-9b6f-d74e60885d60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.851286 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-catalog-content\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.851344 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-utilities\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.851607 4931 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.857133 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-kube-api-access-5tdzp" (OuterVolumeSpecName: "kube-api-access-5tdzp") pod "6677b63a-e1ff-4d6f-9b6f-d74e60885d60" (UID: "6677b63a-e1ff-4d6f-9b6f-d74e60885d60"). InnerVolumeSpecName "kube-api-access-5tdzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.857628 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6677b63a-e1ff-4d6f-9b6f-d74e60885d60" (UID: "6677b63a-e1ff-4d6f-9b6f-d74e60885d60"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.878511 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tvnj\" (UniqueName: \"kubernetes.io/projected/d11b6770-bb41-4825-90a4-3b4af2daecd9-kube-api-access-8tvnj\") pod \"community-operators-97qh7\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") " pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.883485 4931 scope.go:117] "RemoveContainer" containerID="0e4d3615364adb9fc327ac5ce20cdd4fecf281a043a844859ed5dca539ce5720" Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.901225 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.901225 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97qh7"
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.954044 4931 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:03 crc kubenswrapper[4931]: I0130 07:00:03.954073 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tdzp\" (UniqueName: \"kubernetes.io/projected/6677b63a-e1ff-4d6f-9b6f-d74e60885d60-kube-api-access-5tdzp\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.034123 4931 scope.go:117] "RemoveContainer" containerID="7c8becae24c7a8a33bf584e1ab34512a30cd0f1208b8f42cb257da9c6245e6c8"
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.079998 4931 scope.go:117] "RemoveContainer" containerID="113fabb6410c6ac50615d981ed7da97e3148c88e8fc0cf34f88de6f851a2a62e"
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.139018 4931 scope.go:117] "RemoveContainer" containerID="030be6de81f263d984b02a8d10e7722844ea7978d675c59a14a66ccbbd2666b2"
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.168074 4931 scope.go:117] "RemoveContainer" containerID="80ab6efc7f6dcfb70eed703ea54962d42118f91ddd843c75b9238af6658827ba"
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.285538 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9" event={"ID":"6677b63a-e1ff-4d6f-9b6f-d74e60885d60","Type":"ContainerDied","Data":"cc7e299fb616f245f3fd8ebbffc8bbb401b10a331200fb770a781891bf8cf695"}
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.285580 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7e299fb616f245f3fd8ebbffc8bbb401b10a331200fb770a781891bf8cf695"
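The interleaved reconciler_common.go entries above are the kubelet volume manager reconciling desired state against actual state: volumes for the terminating collect-profiles pod are unmounted and detached while volumes for the newly scheduled community-operators pod are verified and mounted in the same pass. A minimal sketch of that desired-vs-actual reconciliation idea in Go; the types and IDs here are illustrative, not the kubelet's actual implementation:

```go
package main

import "fmt"

// volumeID mirrors the UniqueName values in the log
// (plugin path + pod UID + volume name); values below are made up.
type volumeID string

// reconcile unmounts volumes that are no longer desired and mounts
// volumes that are desired but not yet present, echoing the
// UnmountVolume/MountVolume ordering visible above.
func reconcile(desired, actual map[volumeID]bool) {
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume started for volume %q\n", v)
			delete(actual, v)
		}
	}
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume started for volume %q\n", v)
			actual[v] = true
		}
	}
}

func main() {
	// New pod's volumes are desired; old pod's volume is still mounted.
	desired := map[volumeID]bool{
		"kubernetes.io/empty-dir/pod-b/catalog-content": true,
		"kubernetes.io/empty-dir/pod-b/utilities":       true,
	}
	actual := map[volumeID]bool{
		"kubernetes.io/configmap/pod-a/config-volume": true,
	}
	reconcile(desired, actual)
}
```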
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.285600 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495940-2zjc9"
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.413340 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97qh7"]
Jan 30 07:00:04 crc kubenswrapper[4931]: W0130 07:00:04.417045 4931 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd11b6770_bb41_4825_90a4_3b4af2daecd9.slice/crio-530b9dd20a883a453a06f3f485065d62bc9fa1539abe4bb60cd79097ceb450b3 WatchSource:0}: Error finding container 530b9dd20a883a453a06f3f485065d62bc9fa1539abe4bb60cd79097ceb450b3: Status 404 returned error can't find the container with id 530b9dd20a883a453a06f3f485065d62bc9fa1539abe4bb60cd79097ceb450b3
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.835734 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"]
Jan 30 07:00:04 crc kubenswrapper[4931]: I0130 07:00:04.848768 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495895-64nq8"]
Jan 30 07:00:05 crc kubenswrapper[4931]: I0130 07:00:05.300177 4931 generic.go:334] "Generic (PLEG): container finished" podID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerID="f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b" exitCode=0
Jan 30 07:00:05 crc kubenswrapper[4931]: I0130 07:00:05.300266 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerDied","Data":"f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b"}
Jan 30 07:00:05 crc kubenswrapper[4931]: I0130 07:00:05.300364 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerStarted","Data":"530b9dd20a883a453a06f3f485065d62bc9fa1539abe4bb60cd79097ceb450b3"}
Jan 30 07:00:05 crc kubenswrapper[4931]: I0130 07:00:05.436840 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d1c4a0-d36c-47d4-b603-3320c87f7c8e" path="/var/lib/kubelet/pods/f7d1c4a0-d36c-47d4-b603-3320c87f7c8e/volumes"
Jan 30 07:00:07 crc kubenswrapper[4931]: I0130 07:00:07.322888 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerStarted","Data":"7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a"}
Jan 30 07:00:08 crc kubenswrapper[4931]: I0130 07:00:08.337370 4931 generic.go:334] "Generic (PLEG): container finished" podID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerID="7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a" exitCode=0
Jan 30 07:00:08 crc kubenswrapper[4931]: I0130 07:00:08.337589 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerDied","Data":"7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a"}
Jan 30 07:00:09 crc kubenswrapper[4931]: I0130 07:00:09.051045 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-ctjj7"]
pods=["openstack/manila-db-sync-ctjj7"] Jan 30 07:00:09 crc kubenswrapper[4931]: I0130 07:00:09.351528 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerStarted","Data":"1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8"} Jan 30 07:00:09 crc kubenswrapper[4931]: I0130 07:00:09.377090 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-97qh7" podStartSLOduration=2.967603374 podStartE2EDuration="6.377065167s" podCreationTimestamp="2026-01-30 07:00:03 +0000 UTC" firstStartedPulling="2026-01-30 07:00:05.301798352 +0000 UTC m=+6740.671708609" lastFinishedPulling="2026-01-30 07:00:08.711260145 +0000 UTC m=+6744.081170402" observedRunningTime="2026-01-30 07:00:09.372440367 +0000 UTC m=+6744.742350624" watchObservedRunningTime="2026-01-30 07:00:09.377065167 +0000 UTC m=+6744.746975434" Jan 30 07:00:09 crc kubenswrapper[4931]: I0130 07:00:09.433165 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f518288-3c69-4f3a-9e32-9f9211cab22a" path="/var/lib/kubelet/pods/2f518288-3c69-4f3a-9e32-9f9211cab22a/volumes" Jan 30 07:00:13 crc kubenswrapper[4931]: I0130 07:00:13.902101 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:13 crc kubenswrapper[4931]: I0130 07:00:13.902565 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:13 crc kubenswrapper[4931]: I0130 07:00:13.992694 4931 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:14 crc kubenswrapper[4931]: I0130 07:00:14.475375 4931 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:14 crc kubenswrapper[4931]: I0130 07:00:14.539391 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-97qh7"] Jan 30 07:00:16 crc kubenswrapper[4931]: I0130 07:00:16.430796 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-97qh7" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="registry-server" containerID="cri-o://1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8" gracePeriod=2 Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.014353 4931 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.014353 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97qh7"
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.171133 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-utilities\") pod \"d11b6770-bb41-4825-90a4-3b4af2daecd9\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") "
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.171457 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-catalog-content\") pod \"d11b6770-bb41-4825-90a4-3b4af2daecd9\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") "
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.171550 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tvnj\" (UniqueName: \"kubernetes.io/projected/d11b6770-bb41-4825-90a4-3b4af2daecd9-kube-api-access-8tvnj\") pod \"d11b6770-bb41-4825-90a4-3b4af2daecd9\" (UID: \"d11b6770-bb41-4825-90a4-3b4af2daecd9\") "
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.172259 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-utilities" (OuterVolumeSpecName: "utilities") pod "d11b6770-bb41-4825-90a4-3b4af2daecd9" (UID: "d11b6770-bb41-4825-90a4-3b4af2daecd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.189601 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11b6770-bb41-4825-90a4-3b4af2daecd9-kube-api-access-8tvnj" (OuterVolumeSpecName: "kube-api-access-8tvnj") pod "d11b6770-bb41-4825-90a4-3b4af2daecd9" (UID: "d11b6770-bb41-4825-90a4-3b4af2daecd9"). InnerVolumeSpecName "kube-api-access-8tvnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.275815 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tvnj\" (UniqueName: \"kubernetes.io/projected/d11b6770-bb41-4825-90a4-3b4af2daecd9-kube-api-access-8tvnj\") on node \"crc\" DevicePath \"\""
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.275852 4931 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-utilities\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.456731 4931 generic.go:334] "Generic (PLEG): container finished" podID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerID="1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8" exitCode=0 Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.456776 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerDied","Data":"1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8"} Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.456807 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97qh7" event={"ID":"d11b6770-bb41-4825-90a4-3b4af2daecd9","Type":"ContainerDied","Data":"530b9dd20a883a453a06f3f485065d62bc9fa1539abe4bb60cd79097ceb450b3"} Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.456826 4931 scope.go:117] "RemoveContainer" containerID="1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.457010 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97qh7" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.481295 4931 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11b6770-bb41-4825-90a4-3b4af2daecd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.508089 4931 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-97qh7"] Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.512823 4931 scope.go:117] "RemoveContainer" containerID="7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.515588 4931 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-97qh7"] Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.536264 4931 scope.go:117] "RemoveContainer" containerID="f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.588905 4931 scope.go:117] "RemoveContainer" containerID="1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8" Jan 30 07:00:17 crc kubenswrapper[4931]: E0130 07:00:17.589403 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8\": container with ID starting with 1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8 not found: ID does not exist" containerID="1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8" Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.589544 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8"} err="failed to get container status \"1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8\": rpc error: code = NotFound desc = could not find container \"1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8\": container with ID starting with 1ca690e5fa631c6708487e76962fb8c404eb1fe8ac6981253fd5468444515de8 not found: ID does not exist" Jan 30 
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.589570 4931 scope.go:117] "RemoveContainer" containerID="7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a"
Jan 30 07:00:17 crc kubenswrapper[4931]: E0130 07:00:17.589955 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a\": container with ID starting with 7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a not found: ID does not exist" containerID="7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a"
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.589981 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a"} err="failed to get container status \"7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a\": rpc error: code = NotFound desc = could not find container \"7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a\": container with ID starting with 7fa3fd6b0b8a185936b360a7a6337b2b60aaadfd47734a302a50db94babeb38a not found: ID does not exist"
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.589996 4931 scope.go:117] "RemoveContainer" containerID="f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b"
Jan 30 07:00:17 crc kubenswrapper[4931]: E0130 07:00:17.590281 4931 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b\": container with ID starting with f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b not found: ID does not exist" containerID="f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b"
Jan 30 07:00:17 crc kubenswrapper[4931]: I0130 07:00:17.590314 4931 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b"} err="failed to get container status \"f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b\": rpc error: code = NotFound desc = could not find container \"f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b\": container with ID starting with f7ec50cafb0866f182190f0d39f8e5ed6d5a522707e1a662b4113d83ea844e3b not found: ID does not exist"
Jan 30 07:00:19 crc kubenswrapper[4931]: I0130 07:00:19.438233 4931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" path="/var/lib/kubelet/pods/d11b6770-bb41-4825-90a4-3b4af2daecd9/volumes"
Jan 30 07:00:57 crc kubenswrapper[4931]: I0130 07:00:57.362562 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 07:00:57 crc kubenswrapper[4931]: I0130 07:00:57.363226 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
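The "RemoveContainer" / "ContainerStatus from runtime service failed ... NotFound" pairs above record a benign race: by the time the kubelet tries to delete each dead container, CRI-O has already removed it, and the NotFound response leaves the cluster in the desired state anyway. A sketch of that idempotent-delete pattern; the runtime interface and error value here are stand-ins, not the kubelet's or CRI's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound status seen in the log
// ("could not find container ... ID does not exist").
var errNotFound = errors.New("container not found")

// runtime is a stand-in for a container runtime client; only the one
// call used below is modeled.
type runtime interface {
	RemoveContainer(id string) error
}

// removeContainer treats NotFound as success: the goal state
// ("container gone") already holds, so there is nothing to retry.
func removeContainer(rt runtime, id string) error {
	if err := rt.RemoveContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already removed; ignoring NotFound\n", id)
			return nil
		}
		return fmt.Errorf("failed to remove container %s: %w", id, err)
	}
	return nil
}

// fakeRuntime always reports the container as already gone, mimicking
// the race captured in the log.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error { return errNotFound }

func main() {
	if err := removeContainer(fakeRuntime{}, "1ca690e5fa63"); err != nil {
		fmt.Println("error:", err)
	}
}
```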
"SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495941-bp296"] Jan 30 07:01:00 crc kubenswrapper[4931]: E0130 07:01:00.160389 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="registry-server" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.160409 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="registry-server" Jan 30 07:01:00 crc kubenswrapper[4931]: E0130 07:01:00.160472 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="extract-utilities" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.160481 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="extract-utilities" Jan 30 07:01:00 crc kubenswrapper[4931]: E0130 07:01:00.160512 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6677b63a-e1ff-4d6f-9b6f-d74e60885d60" containerName="collect-profiles" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.160521 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="6677b63a-e1ff-4d6f-9b6f-d74e60885d60" containerName="collect-profiles" Jan 30 07:01:00 crc kubenswrapper[4931]: E0130 07:01:00.160535 4931 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="extract-content" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.160543 4931 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="extract-content" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.160787 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11b6770-bb41-4825-90a4-3b4af2daecd9" containerName="registry-server" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.160813 4931 memory_manager.go:354] "RemoveStaleState removing state" podUID="6677b63a-e1ff-4d6f-9b6f-d74e60885d60" containerName="collect-profiles" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.161825 4931 util.go:30] "No sandbox for pod can be found. 
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.161825 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.189793 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495941-bp296"]
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.245606 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzd8\" (UniqueName: \"kubernetes.io/projected/c3963310-007b-4e75-9a1f-6e84507084c3-kube-api-access-pjzd8\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.245663 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-fernet-keys\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.245765 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-combined-ca-bundle\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.245796 4931 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-config-data\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.348739 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-combined-ca-bundle\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.348827 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-config-data\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.349037 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzd8\" (UniqueName: \"kubernetes.io/projected/c3963310-007b-4e75-9a1f-6e84507084c3-kube-api-access-pjzd8\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.349092 4931 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-fernet-keys\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296"
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-fernet-keys\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.358537 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-config-data\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.359728 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-combined-ca-bundle\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.371856 4931 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzd8\" (UniqueName: \"kubernetes.io/projected/c3963310-007b-4e75-9a1f-6e84507084c3-kube-api-access-pjzd8\") pod \"keystone-cron-29495941-bp296\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") " pod="openstack/keystone-cron-29495941-bp296" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.545125 4931 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-bp296" Jan 30 07:01:00 crc kubenswrapper[4931]: I0130 07:01:00.992513 4931 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495941-bp296"] Jan 30 07:01:01 crc kubenswrapper[4931]: I0130 07:01:01.997994 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-bp296" event={"ID":"c3963310-007b-4e75-9a1f-6e84507084c3","Type":"ContainerStarted","Data":"09e69bd89e7ad5ed2585b6b1cd788defcf2c6ec2028c23926ae8ca0f752a9f20"} Jan 30 07:01:02 crc kubenswrapper[4931]: I0130 07:01:01.998874 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-bp296" event={"ID":"c3963310-007b-4e75-9a1f-6e84507084c3","Type":"ContainerStarted","Data":"6cdb452b95e20b6acbd95299d7131d4cf88076d5ac5b21f2922a236dafd43211"} Jan 30 07:01:02 crc kubenswrapper[4931]: I0130 07:01:02.025457 4931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495941-bp296" podStartSLOduration=2.025413828 podStartE2EDuration="2.025413828s" podCreationTimestamp="2026-01-30 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 07:01:02.023546736 +0000 UTC m=+6797.393457023" watchObservedRunningTime="2026-01-30 07:01:02.025413828 +0000 UTC m=+6797.395324125" Jan 30 07:01:04 crc kubenswrapper[4931]: I0130 07:01:04.282868 4931 scope.go:117] "RemoveContainer" containerID="d9b4e7ab55cdff59c11d247a35f15b900c5c2d23ac2be2cf5caa19378305d01e" Jan 30 07:01:04 crc kubenswrapper[4931]: I0130 07:01:04.329239 4931 scope.go:117] "RemoveContainer" containerID="e300d33068406baea942af2b5b021d10a35ce639099354dd534b82d9b9278f4c" Jan 30 07:01:05 crc kubenswrapper[4931]: I0130 07:01:05.026590 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3963310-007b-4e75-9a1f-6e84507084c3" containerID="09e69bd89e7ad5ed2585b6b1cd788defcf2c6ec2028c23926ae8ca0f752a9f20" 
Jan 30 07:01:05 crc kubenswrapper[4931]: I0130 07:01:05.026590 4931 generic.go:334] "Generic (PLEG): container finished" podID="c3963310-007b-4e75-9a1f-6e84507084c3" containerID="09e69bd89e7ad5ed2585b6b1cd788defcf2c6ec2028c23926ae8ca0f752a9f20" exitCode=0
Jan 30 07:01:05 crc kubenswrapper[4931]: I0130 07:01:05.026644 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-bp296" event={"ID":"c3963310-007b-4e75-9a1f-6e84507084c3","Type":"ContainerDied","Data":"09e69bd89e7ad5ed2585b6b1cd788defcf2c6ec2028c23926ae8ca0f752a9f20"}
Jan 30 07:01:06 crc kubenswrapper[4931]: E0130 07:01:06.307730 4931 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc"
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.504362 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.597470 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-fernet-keys\") pod \"c3963310-007b-4e75-9a1f-6e84507084c3\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") "
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.609448 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c3963310-007b-4e75-9a1f-6e84507084c3" (UID: "c3963310-007b-4e75-9a1f-6e84507084c3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.616481 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-combined-ca-bundle\") pod \"c3963310-007b-4e75-9a1f-6e84507084c3\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") "
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.616556 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-config-data\") pod \"c3963310-007b-4e75-9a1f-6e84507084c3\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") "
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.616687 4931 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzd8\" (UniqueName: \"kubernetes.io/projected/c3963310-007b-4e75-9a1f-6e84507084c3-kube-api-access-pjzd8\") pod \"c3963310-007b-4e75-9a1f-6e84507084c3\" (UID: \"c3963310-007b-4e75-9a1f-6e84507084c3\") "
Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.617870 4931 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-fernet-keys\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.660599 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3963310-007b-4e75-9a1f-6e84507084c3" (UID: "c3963310-007b-4e75-9a1f-6e84507084c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.711539 4931 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-config-data" (OuterVolumeSpecName: "config-data") pod "c3963310-007b-4e75-9a1f-6e84507084c3" (UID: "c3963310-007b-4e75-9a1f-6e84507084c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.719938 4931 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.720003 4931 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3963310-007b-4e75-9a1f-6e84507084c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:06 crc kubenswrapper[4931]: I0130 07:01:06.720031 4931 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzd8\" (UniqueName: \"kubernetes.io/projected/c3963310-007b-4e75-9a1f-6e84507084c3-kube-api-access-pjzd8\") on node \"crc\" DevicePath \"\"" Jan 30 07:01:07 crc kubenswrapper[4931]: I0130 07:01:07.058308 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495941-bp296" event={"ID":"c3963310-007b-4e75-9a1f-6e84507084c3","Type":"ContainerDied","Data":"6cdb452b95e20b6acbd95299d7131d4cf88076d5ac5b21f2922a236dafd43211"} Jan 30 07:01:07 crc kubenswrapper[4931]: I0130 07:01:07.058364 4931 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cdb452b95e20b6acbd95299d7131d4cf88076d5ac5b21f2922a236dafd43211" Jan 30 07:01:07 crc kubenswrapper[4931]: I0130 07:01:07.058495 4931 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 07:01:07 crc kubenswrapper[4931]: I0130 07:01:07.058495 4931 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495941-bp296"
Jan 30 07:01:27 crc kubenswrapper[4931]: I0130 07:01:27.362919 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 07:01:27 crc kubenswrapper[4931]: I0130 07:01:27.363351 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.362730 4931 patch_prober.go:28] interesting pod/machine-config-daemon-wfdxs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.363386 4931 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.363501 4931 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs"
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.364586 4931 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8299e6538c113448d4bfd5d8d4cfd1ff286a1ac24b25d9c331a09fd1c9ae275"} pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.364668 4931 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" podUID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerName="machine-config-daemon" containerID="cri-o://e8299e6538c113448d4bfd5d8d4cfd1ff286a1ac24b25d9c331a09fd1c9ae275" gracePeriod=600
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.723153 4931 generic.go:334] "Generic (PLEG): container finished" podID="189be3dc-d439-47c2-b1f2-7413fc4b5e85" containerID="e8299e6538c113448d4bfd5d8d4cfd1ff286a1ac24b25d9c331a09fd1c9ae275" exitCode=0
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.723278 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerDied","Data":"e8299e6538c113448d4bfd5d8d4cfd1ff286a1ac24b25d9c331a09fd1c9ae275"}
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.723875 4931 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wfdxs" event={"ID":"189be3dc-d439-47c2-b1f2-7413fc4b5e85","Type":"ContainerStarted","Data":"22b60e2f05f30864d2598292b5a5136957888853e8221f9434b6c9fef14ee97c"}
Jan 30 07:01:57 crc kubenswrapper[4931]: I0130 07:01:57.723911 4931 scope.go:117] "RemoveContainer" containerID="1ff15d5aac3f4c0764845266c3dd85db97427d881e5c701d217d7599df18276d"
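The machine-config-daemon entries trace the standard HTTP liveness cycle: the kubelet GETs http://127.0.0.1:8798/health roughly every 30 seconds (07:00:57, 07:01:27, 07:01:57), and once failures accumulate it kills the container with the configured grace period (600s here) and starts a replacement. A minimal sketch of such a prober in Go; the endpoint and period come from the log, while the failure threshold and timeout are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check; a connection error or a
// non-2xx status counts as a failure, matching the prober.go entries
// above ("connect: connection refused").
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const (
		url              = "http://127.0.0.1:8798/health" // endpoint from the log
		failureThreshold = 3                              // assumed; the common probe default
		period           = 30 * time.Second               // matches the ~30s spacing in the log
	)
	failures := 0
	for {
		if err := probe(url, 10*time.Second); err != nil {
			failures++
			fmt.Printf("Probe failed (%d/%d): %v\n", failures, failureThreshold, err)
			if failures >= failureThreshold {
				fmt.Println("threshold reached: container would be killed and restarted")
				failures = 0
			}
		} else {
			failures = 0 // consecutive-failure count resets on success
		}
		time.Sleep(period)
	}
}
```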